| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2112.10194 | Dima Damen | Will Price, Carl Vondrick, Dima Damen | UnweaveNet: Unweaving Activity Stories | Accepted at IEEE/CVF Computer Vision and Pattern Recognition (CVPR) 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Our lives can be seen as a complex weaving of activities; we switch from one activity to another, to maximise our achievements or in reaction to demands placed upon us. Observing a video of unscripted daily activities, we parse the video into its constituent activity threads through a process we call unweaving. To accomplish this, we introduce a video representation explicitly capturing activity threads called a thread bank, along with a neural controller capable of detecting goal changes and resuming of past activities, together forming UnweaveNet. We train and evaluate UnweaveNet on sequences from the unscripted egocentric dataset EPIC-KITCHENS. We propose and showcase the efficacy of pretraining UnweaveNet in a self-supervised manner. | [{"created": "Sun, 19 Dec 2021 17:07:37 GMT", "version": "v1"}, {"created": "Mon, 4 Apr 2022 11:33:49 GMT", "version": "v2"}] | 2022-04-05 | [["Price", "Will", ""], ["Vondrick", "Carl", ""], ["Damen", "Dima", ""]] | Our lives can be seen as a complex weaving of activities; we switch from one activity to another, to maximise our achievements or in reaction to demands placed upon us. Observing a video of unscripted daily activities, we parse the video into its constituent activity threads through a process we call unweaving. To accomplish this, we introduce a video representation explicitly capturing activity threads called a thread bank, along with a neural controller capable of detecting goal changes and resuming of past activities, together forming UnweaveNet. We train and evaluate UnweaveNet on sequences from the unscripted egocentric dataset EPIC-KITCHENS. We propose and showcase the efficacy of pretraining UnweaveNet in a self-supervised manner. |
| 1010.2993 | Omur Ozel | Jing Yang, Omur Ozel, Sennur Ulukus | Broadcasting with an Energy Harvesting Rechargeable Transmitter | Submitted to IEEE Transactions on Wireless Communications, October 2010 | null | 10.1109/TWC.2011.120911.101813 | null | cs.IT cs.NI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the transmission completion time minimization problem in a two-user additive white Gaussian noise (AWGN) broadcast channel, where the transmitter is able to harvest energy from the nature, using a rechargeable battery. The harvested energy is modeled to arrive at the transmitter randomly during the course of transmissions. The transmitter has a fixed number of packets to be delivered to each receiver. Our goal is to minimize the time by which all of the packets for both users are delivered to their respective destinations. To this end, we optimize the transmit powers and transmission rates intended for both users. We first analyze the structural properties of the optimal transmission policy. We prove that the optimal total transmit power has the same structure as the optimal single-user transmit power. We also prove that there exists a cut-off power level for the stronger user. If the optimal total transmit power is lower than this cut-off level, all transmit power is allocated to the stronger user, and when the optimal total transmit power is larger than this cut-off level, all transmit power above this level is allocated to the weaker user. Based on these structural properties of the optimal policy, we propose an algorithm that yields the globally optimal off-line scheduling policy. Our algorithm is based on the idea of reducing the two-user broadcast channel problem into a single-user problem as much as possible. | [{"created": "Thu, 14 Oct 2010 17:58:12 GMT", "version": "v1"}] | 2016-11-15 | [["Yang", "Jing", ""], ["Ozel", "Omur", ""], ["Ulukus", "Sennur", ""]] | In this paper, we investigate the transmission completion time minimization problem in a two-user additive white Gaussian noise (AWGN) broadcast channel, where the transmitter is able to harvest energy from the nature, using a rechargeable battery. The harvested energy is modeled to arrive at the transmitter randomly during the course of transmissions. The transmitter has a fixed number of packets to be delivered to each receiver. Our goal is to minimize the time by which all of the packets for both users are delivered to their respective destinations. To this end, we optimize the transmit powers and transmission rates intended for both users. We first analyze the structural properties of the optimal transmission policy. We prove that the optimal total transmit power has the same structure as the optimal single-user transmit power. We also prove that there exists a cut-off power level for the stronger user. If the optimal total transmit power is lower than this cut-off level, all transmit power is allocated to the stronger user, and when the optimal total transmit power is larger than this cut-off level, all transmit power above this level is allocated to the weaker user. Based on these structural properties of the optimal policy, we propose an algorithm that yields the globally optimal off-line scheduling policy. Our algorithm is based on the idea of reducing the two-user broadcast channel problem into a single-user problem as much as possible. |
| 2212.00423 | Kim Bjerge | Kim Bjerge, Carsten Eie Frigaard and Henrik Karstoft | Motion Informed Object Detection of Small Insects in Time-lapse Camera Recordings | 10 pages, 6 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Insects as pollinators play a crucial role in ecosystem management and world food production. However, insect populations are declining, calling for efficient methods of insect monitoring. Existing methods analyze video or time-lapse images of insects in nature, but the analysis is challenging since insects are small objects in complex and dynamic scenes of natural vegetation. In this work, we provide a dataset of primary honeybees visiting three different plant species during two months of the summer period. The dataset consists of 107,387 annotated time-lapse images from multiple cameras, including 9,423 annotated insects. We present a method pipeline for detecting insects in time-lapse RGB images. The pipeline consists of a two-step process. Firstly, the time-lapse RGB images are preprocessed to enhance insects in the images. This Motion-Informed-Enhancement technique uses motion and colors to enhance insects in images. Secondly, the enhanced images are subsequently fed into a Convolutional Neural network (CNN) object detector. The method improves the deep learning object detectors You Only Look Once (YOLO) and Faster Region-based CNN (Faster R-CNN). Using Motion-Informed-Enhancement, the YOLO-detector improves the average micro F1-score from 0.49 to 0.71, and the Faster R-CNN-detector improves the average micro F1-score from 0.32 to 0.56 on the dataset. Our dataset and proposed method provide a step forward to automate the time-lapse camera monitoring of flying insects. The dataset is published on: https://vision.eng.au.dk/mie/ | [{"created": "Thu, 1 Dec 2022 10:54:06 GMT", "version": "v1"}, {"created": "Thu, 29 Jun 2023 15:01:00 GMT", "version": "v2"}] | 2023-06-30 | [["Bjerge", "Kim", ""], ["Frigaard", "Carsten Eie", ""], ["Karstoft", "Henrik", ""]] | Insects as pollinators play a crucial role in ecosystem management and world food production. However, insect populations are declining, calling for efficient methods of insect monitoring. Existing methods analyze video or time-lapse images of insects in nature, but the analysis is challenging since insects are small objects in complex and dynamic scenes of natural vegetation. In this work, we provide a dataset of primary honeybees visiting three different plant species during two months of the summer period. The dataset consists of 107,387 annotated time-lapse images from multiple cameras, including 9,423 annotated insects. We present a method pipeline for detecting insects in time-lapse RGB images. The pipeline consists of a two-step process. Firstly, the time-lapse RGB images are preprocessed to enhance insects in the images. This Motion-Informed-Enhancement technique uses motion and colors to enhance insects in images. Secondly, the enhanced images are subsequently fed into a Convolutional Neural network (CNN) object detector. The method improves the deep learning object detectors You Only Look Once (YOLO) and Faster Region-based CNN (Faster R-CNN). Using Motion-Informed-Enhancement, the YOLO-detector improves the average micro F1-score from 0.49 to 0.71, and the Faster R-CNN-detector improves the average micro F1-score from 0.32 to 0.56 on the dataset. Our dataset and proposed method provide a step forward to automate the time-lapse camera monitoring of flying insects. The dataset is published on: https://vision.eng.au.dk/mie/ |
| 2406.11245 | Qiong Wu | Kangwei Qi, Qiong Wu, Pingyi Fan, Nan Cheng, Wen Chen, Jiangzhou Wang and Khaled B. Letaief | Deep-Reinforcement-Learning-Based AoI-Aware Resource Allocation for RIS-Aided IoV Networks | This paper has been submitted to IEEE Journal. The source code has been released at https://github.com/qiongwu86/RIS-RB-AoI-V2X-DRL.git | null | null | null | cs.LG cs.DC cs.NI eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconfigurable Intelligent Surface (RIS) is a pivotal technology in communication, offering an alternative path that significantly enhances the link quality in wireless communication environments. In this paper, we propose a RIS-assisted internet of vehicles (IoV) network, considering the vehicle-to-everything (V2X) communication method. In addition, in order to improve the timeliness of vehicle-to-infrastructure (V2I) links and the stability of vehicle-to-vehicle (V2V) links, we introduce the age of information (AoI) model and the payload transmission probability model. Therefore, with the objective of minimizing the AoI of V2I links and prioritizing transmission of V2V links payload, we construct this optimization problem as an Markov decision process (MDP) problem in which the BS serves as an agent to allocate resources and control phase-shift for the vehicles using the soft actor-critic (SAC) algorithm, which gradually converges and maintains a high stability. A AoI-aware joint vehicular resource allocation and RIS phase-shift control scheme based on SAC algorithm is proposed and simulation results show that its convergence speed, cumulative reward, AoI performance, and payload transmission probability outperforms those of proximal policy optimization (PPO), deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3) and stochastic algorithms. | [{"created": "Mon, 17 Jun 2024 06:16:07 GMT", "version": "v1"}] | 2024-06-18 | [["Qi", "Kangwei", ""], ["Wu", "Qiong", ""], ["Fan", "Pingyi", ""], ["Cheng", "Nan", ""], ["Chen", "Wen", ""], ["Wang", "Jiangzhou", ""], ["Letaief", "Khaled B.", ""]] | Reconfigurable Intelligent Surface (RIS) is a pivotal technology in communication, offering an alternative path that significantly enhances the link quality in wireless communication environments. In this paper, we propose a RIS-assisted internet of vehicles (IoV) network, considering the vehicle-to-everything (V2X) communication method. In addition, in order to improve the timeliness of vehicle-to-infrastructure (V2I) links and the stability of vehicle-to-vehicle (V2V) links, we introduce the age of information (AoI) model and the payload transmission probability model. Therefore, with the objective of minimizing the AoI of V2I links and prioritizing transmission of V2V links payload, we construct this optimization problem as an Markov decision process (MDP) problem in which the BS serves as an agent to allocate resources and control phase-shift for the vehicles using the soft actor-critic (SAC) algorithm, which gradually converges and maintains a high stability. A AoI-aware joint vehicular resource allocation and RIS phase-shift control scheme based on SAC algorithm is proposed and simulation results show that its convergence speed, cumulative reward, AoI performance, and payload transmission probability outperforms those of proximal policy optimization (PPO), deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3) and stochastic algorithms. |
| 1610.08167 | EPTCS | Vladimir Klebanov (KIT), Alexander Weigl (KIT), J\"org Weisbarth | Sound Probabilistic #SAT with Projection | In Proceedings QAPL'16, arXiv:1610.07696 | EPTCS 227, 2016, pp. 15-29 | 10.4204/EPTCS.227.2 | null | cs.LO cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an improved method for a sound probabilistic estimation of the model count of a boolean formula under projection. The problem solved can be used to encode a variety of quantitative program analyses, such as concerning security of resource consumption. We implement the technique and discuss its application to quantifying information flow in programs. | [{"created": "Wed, 26 Oct 2016 05:00:04 GMT", "version": "v1"}] | 2016-10-27 | [["Klebanov", "Vladimir", "", "KIT"], ["Weigl", "Alexander", "", "KIT"], ["Weisbarth", "Jörg", ""]] | We present an improved method for a sound probabilistic estimation of the model count of a boolean formula under projection. The problem solved can be used to encode a variety of quantitative program analyses, such as concerning security of resource consumption. We implement the technique and discuss its application to quantifying information flow in programs. |
| 1309.2399 | Pavel Klav\'ik | Steven Chaplick, Radoslav Fulek, Pavel Klav\'ik | Extending Partial Representations of Circle Graphs | null | null | null | null | cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The partial representation extension problem is a recently introduced generalization of the recognition problem. A circle graph is an intersection graph of chords of a circle. We study the partial representation extension problem for circle graphs, where the input consists of a graph $G$ and a partial representation $\cal R'$ giving some pre-drawn chords that represent an induced subgraph of $G$. The question is whether one can extend $\cal R'$ to a representation $\cal R$ of the entire graph $G$, i.e., whether one can draw the remaining chords into a partially pre-drawn representation to obtain a representation of $G$. Our main result is an $O(n^3)$ time algorithm for partial representation extension of circle graphs, where $n$ is the number of vertices. To show this, we describe the structure of all representations of a circle graph using split decomposition. This can be of independent interest. | [{"created": "Tue, 10 Sep 2013 07:50:41 GMT", "version": "v1"}, {"created": "Sat, 30 Sep 2017 20:27:30 GMT", "version": "v2"}] | 2017-10-03 | [["Chaplick", "Steven", ""], ["Fulek", "Radoslav", ""], ["Klavík", "Pavel", ""]] | The partial representation extension problem is a recently introduced generalization of the recognition problem. A circle graph is an intersection graph of chords of a circle. We study the partial representation extension problem for circle graphs, where the input consists of a graph $G$ and a partial representation $\cal R'$ giving some pre-drawn chords that represent an induced subgraph of $G$. The question is whether one can extend $\cal R'$ to a representation $\cal R$ of the entire graph $G$, i.e., whether one can draw the remaining chords into a partially pre-drawn representation to obtain a representation of $G$. Our main result is an $O(n^3)$ time algorithm for partial representation extension of circle graphs, where $n$ is the number of vertices. To show this, we describe the structure of all representations of a circle graph using split decomposition. This can be of independent interest. |
| 2305.16174 | Claudio Battiloro Mr | Claudio Battiloro, Indro Spinelli, Lev Telyatnikov, Michael Bronstein, Simone Scardapane, Paolo Di Lorenzo | From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module | Under review. 17 pages, 5 figures | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it. However, most of LGI methods assume to have a (noisy, incomplete, improvable, ...) input graph to rewire and can solely learn regular graph topologies. In the wake of the success of Topological Deep Learning (TDL), we study Latent Topology Inference (LTI) for learning higher-order cell complexes (with sparse and not regular topology) describing multi-way interactions between data points. To this aim, we introduce the Differentiable Cell Complex Module (DCM), a novel learnable function that computes cell probabilities in the complex to improve the downstream task. We show how to integrate DCM with cell complex message passing networks layers and train it in a end-to-end fashion, thanks to a two-step inference procedure that avoids an exhaustive search across all possible cells in the input, thus maintaining scalability. Our model is tested on several homophilic and heterophilic graph datasets and it is shown to outperform other state-of-the-art techniques, offering significant improvements especially in cases where an input graph is not provided. | [{"created": "Thu, 25 May 2023 15:33:19 GMT", "version": "v1"}, {"created": "Thu, 3 Aug 2023 13:46:09 GMT", "version": "v2"}] | 2023-08-04 | [["Battiloro", "Claudio", ""], ["Spinelli", "Indro", ""], ["Telyatnikov", "Lev", ""], ["Bronstein", "Michael", ""], ["Scardapane", "Simone", ""], ["Di Lorenzo", "Paolo", ""]] | Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it. However, most of LGI methods assume to have a (noisy, incomplete, improvable, ...) input graph to rewire and can solely learn regular graph topologies. In the wake of the success of Topological Deep Learning (TDL), we study Latent Topology Inference (LTI) for learning higher-order cell complexes (with sparse and not regular topology) describing multi-way interactions between data points. To this aim, we introduce the Differentiable Cell Complex Module (DCM), a novel learnable function that computes cell probabilities in the complex to improve the downstream task. We show how to integrate DCM with cell complex message passing networks layers and train it in a end-to-end fashion, thanks to a two-step inference procedure that avoids an exhaustive search across all possible cells in the input, thus maintaining scalability. Our model is tested on several homophilic and heterophilic graph datasets and it is shown to outperform other state-of-the-art techniques, offering significant improvements especially in cases where an input graph is not provided. |
| 2002.01642 | Osman Tursun | Osman Tursun, Simon Denman, Sridha Sridharan and Clinton Fookes | Learning Test-time Augmentation for Content-based Image Retrieval | null | null | 10.1016/j.cviu.2022.103494 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Off-the-shelf convolutional neural network features achieve outstanding results in many image retrieval tasks. However, their invariance to target data is pre-defined by the network architecture and training data. Existing image retrieval approaches require fine-tuning or modification of pre-trained networks to adapt to variations unique to the target data. In contrast, our method enhances the invariance of off-the-shelf features by aggregating features extracted from images augmented at test-time, with augmentations guided by a policy learned through reinforcement learning. The learned policy assigns different magnitudes and weights to the selected transformations, which are selected from a list of image transformations. Policies are evaluated using a metric learning protocol to learn the optimal policy. The model converges quickly and the cost of each policy iteration is minimal as we propose an off-line caching technique to greatly reduce the computational cost of extracting features from augmented images. Experimental results on large trademark retrieval (METU trademark dataset) and landmark retrieval (ROxford5k and RParis6k scene datasets) tasks show that the learned ensemble of transformations is highly effective for improving performance, and is practical, and transferable. | [{"created": "Wed, 5 Feb 2020 05:08:41 GMT", "version": "v1"}, {"created": "Fri, 18 Dec 2020 07:54:06 GMT", "version": "v2"}, {"created": "Mon, 8 Feb 2021 06:32:56 GMT", "version": "v3"}, {"created": "Thu, 12 Aug 2021 06:05:48 GMT", "version": "v4"}, {"created": "Tue, 5 Jul 2022 04:28:32 GMT", "version": "v5"}] | 2022-07-26 | [["Tursun", "Osman", ""], ["Denman", "Simon", ""], ["Sridharan", "Sridha", ""], ["Fookes", "Clinton", ""]] | Off-the-shelf convolutional neural network features achieve outstanding results in many image retrieval tasks. However, their invariance to target data is pre-defined by the network architecture and training data. Existing image retrieval approaches require fine-tuning or modification of pre-trained networks to adapt to variations unique to the target data. In contrast, our method enhances the invariance of off-the-shelf features by aggregating features extracted from images augmented at test-time, with augmentations guided by a policy learned through reinforcement learning. The learned policy assigns different magnitudes and weights to the selected transformations, which are selected from a list of image transformations. Policies are evaluated using a metric learning protocol to learn the optimal policy. The model converges quickly and the cost of each policy iteration is minimal as we propose an off-line caching technique to greatly reduce the computational cost of extracting features from augmented images. Experimental results on large trademark retrieval (METU trademark dataset) and landmark retrieval (ROxford5k and RParis6k scene datasets) tasks show that the learned ensemble of transformations is highly effective for improving performance, and is practical, and transferable. |
| 2302.00894 | Nankai Lin | Xiaotian Lin, Nankai Lin, Yingwen Fu, Ziyu Yang and Shengyi Jiang | How to choose "Good" Samples for Text Data Augmentation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning-based text classification models need abundant labeled data to obtain competitive performance. Unfortunately, annotating large-size corpus is time-consuming and laborious. To tackle this, multiple researches try to use data augmentation to expand the corpus size. However, data augmentation may potentially produce some noisy augmented samples. There are currently no works exploring sample selection for augmented samples in nature language processing field. In this paper, we propose a novel self-training selection framework with two selectors to select the high-quality samples from data augmentation. Specifically, we firstly use an entropy-based strategy and the model prediction to select augmented samples. Considering some samples with high quality at the above step may be wrongly filtered, we propose to recall them from two perspectives of word overlap and semantic similarity. Experimental results show the effectiveness and simplicity of our framework. | [{"created": "Thu, 2 Feb 2023 06:01:50 GMT", "version": "v1"}] | 2023-02-03 | [["Lin", "Xiaotian", ""], ["Lin", "Nankai", ""], ["Fu", "Yingwen", ""], ["Yang", "Ziyu", ""], ["Jiang", "Shengyi", ""]] | Deep learning-based text classification models need abundant labeled data to obtain competitive performance. Unfortunately, annotating large-size corpus is time-consuming and laborious. To tackle this, multiple researches try to use data augmentation to expand the corpus size. However, data augmentation may potentially produce some noisy augmented samples. There are currently no works exploring sample selection for augmented samples in nature language processing field. In this paper, we propose a novel self-training selection framework with two selectors to select the high-quality samples from data augmentation. Specifically, we firstly use an entropy-based strategy and the model prediction to select augmented samples. Considering some samples with high quality at the above step may be wrongly filtered, we propose to recall them from two perspectives of word overlap and semantic similarity. Experimental results show the effectiveness and simplicity of our framework. |
| 2208.04632 | EPTCS | Luc Edixhoven (Open University (Heerlen) and CWI (Amsterdam), Netherlands), Sung-Shik Jongmans (Open University (Heerlen) and CWI (Amsterdam), Netherlands), Jos\'e Proen\c{c}a (CISTER, ISEP, Polytechnic Institute of Porto, Portugal), Guillermina Cledou (HASLab, INESC TEC and University of Minho, Portugal) | Branching Pomsets for Choreographies | In Proceedings ICE 2022, arXiv:2208.04086 | EPTCS 365, 2022, pp. 37-52 | 10.4204/EPTCS.365.3 | null | cs.PL cs.LO | http://creativecommons.org/licenses/by/4.0/ | Choreographic languages describe possible sequences of interactions among a set of agents. Typical models are based on languages or automata over sending and receiving actions. Pomsets provide a more compact alternative by using a partial order over these actions and by not making explicit the possible interleaving of concurrent actions. However, pomsets offer no compact representation of choices. For example, if an agent Alice can send one of two possible messages to Bob three times, one would need a set of 2 * 2 * 2 distinct pomsets to represent all possible branches of Alice's behaviour. This paper proposes an extension of pomsets, named branching pomsets, with a branching structure that can represent Alice's behaviour using 2 + 2 + 2 ordered actions. We encode choreographies as branching pomsets and show that the pomset semantics of the encoded choreographies are bisimilar to their operational semantics. | [{"created": "Tue, 9 Aug 2022 09:53:35 GMT", "version": "v1"}] | 2022-08-10 | [["Edixhoven", "Luc", "", "Open University"], ["Jongmans", "Sung-Shik", "", "Open University"], ["Proença", "José", "", "CISTER, ISEP, Polytechnic Institute of Porto, Portugal"], ["Cledou", "Guillermina", "", "HASLab, INESC TEC and University of Minho, Portugal"]] | Choreographic languages describe possible sequences of interactions among a set of agents. Typical models are based on languages or automata over sending and receiving actions. Pomsets provide a more compact alternative by using a partial order over these actions and by not making explicit the possible interleaving of concurrent actions. However, pomsets offer no compact representation of choices. For example, if an agent Alice can send one of two possible messages to Bob three times, one would need a set of 2 * 2 * 2 distinct pomsets to represent all possible branches of Alice's behaviour. This paper proposes an extension of pomsets, named branching pomsets, with a branching structure that can represent Alice's behaviour using 2 + 2 + 2 ordered actions. We encode choreographies as branching pomsets and show that the pomset semantics of the encoded choreographies are bisimilar to their operational semantics. |
| 2407.11549 | Yin Jou Huang | Yin Jou Huang and Rafik Hadfi | How Personality Traits Influence Negotiation Outcomes? A Simulation based on Large Language Models | 13 pages, 4 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Psychological evidence reveals the influence of personality traits on decision-making. For instance, agreeableness is generally associated with positive outcomes in negotiations, whereas neuroticism is often linked to less favorable outcomes. This paper introduces a simulation framework centered on Large Language Model (LLM) agents endowed with synthesized personality traits. The agents negotiate within bargaining domains and possess customizable personalities and objectives. The experimental results show that the behavioral tendencies of LLM-based simulations could reproduce behavioral patterns observed in human negotiations. The contribution is twofold. First, we propose a simulation methodology that investigates the alignment between the linguistic and economic capabilities of LLM agents. Secondly, we offer empirical insights into the strategic impact of Big-Five personality traits on the outcomes of bilateral negotiations. We also provide a case study based on synthesized bargaining dialogues to reveal intriguing behaviors, including deceitful and compromising behaviors. | [{"created": "Tue, 16 Jul 2024 09:52:51 GMT", "version": "v1"}] | 2024-07-17 | [["Huang", "Yin Jou", ""], ["Hadfi", "Rafik", ""]] | Psychological evidence reveals the influence of personality traits on decision-making. For instance, agreeableness is generally associated with positive outcomes in negotiations, whereas neuroticism is often linked to less favorable outcomes. This paper introduces a simulation framework centered on Large Language Model (LLM) agents endowed with synthesized personality traits. The agents negotiate within bargaining domains and possess customizable personalities and objectives. The experimental results show that the behavioral tendencies of LLM-based simulations could reproduce behavioral patterns observed in human negotiations. The contribution is twofold. First, we propose a simulation methodology that investigates the alignment between the linguistic and economic capabilities of LLM agents. Secondly, we offer empirical insights into the strategic impact of Big-Five personality traits on the outcomes of bilateral negotiations. We also provide a case study based on synthesized bargaining dialogues to reveal intriguing behaviors, including deceitful and compromising behaviors. |
2405.01144
|
Niousha Nazemi
|
Niousha Nazemi, Omid Tavallaie, Shuaijun Chen, Albert Y. Zomaya, Ralph
Holz
|
Boosting Communication Efficiency of Federated Learning's Secure
Aggregation
|
2 pages, 4 figures, The 54th Annual IEEE/IFIP International
Conference on Dependable Systems and Networks
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Learning (FL) is a decentralized machine learning approach where
client devices train models locally and send them to a server that performs
aggregation to generate a global model. FL is vulnerable to model inversion
attacks, where the server can infer sensitive client data from trained models.
Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue
by masking each client's trained model using shared secrets and individual
elements generated locally on the client's device. Although SecAgg effectively
preserves privacy, it imposes considerable communication and computation
overhead, especially as network size increases. Building upon SecAgg, this
poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol
that substantially reduces this overhead by using only two shared secrets per
client to mask the model. We propose our method for stable networks with low
delay variation and limited client dropouts. CESA is independent of the data
distribution and network size (for more than 6 nodes), preventing the
honest-but-curious server from accessing unmasked models. Our initial
evaluation reveals that CESA significantly reduces the communication cost
compared to SecAgg.
|
[
{
"created": "Thu, 2 May 2024 10:00:16 GMT",
"version": "v1"
}
] |
2024-05-03
|
[
[
"Nazemi",
"Niousha",
""
],
[
"Tavallaie",
"Omid",
""
],
[
"Chen",
"Shuaijun",
""
],
[
"Zomaya",
"Albert Y.",
""
],
[
"Holz",
"Ralph",
""
]
] |
Federated Learning (FL) is a decentralized machine learning approach where client devices train models locally and send them to a server that performs aggregation to generate a global model. FL is vulnerable to model inversion attacks, where the server can infer sensitive client data from trained models. Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue by masking each client's trained model using shared secrets and individual elements generated locally on the client's device. Although SecAgg effectively preserves privacy, it imposes considerable communication and computation overhead, especially as network size increases. Building upon SecAgg, this poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces this overhead by using only two shared secrets per client to mask the model. We propose our method for stable networks with low delay variation and limited client dropouts. CESA is independent of the data distribution and network size (for more than 6 nodes), preventing the honest-but-curious server from accessing unmasked models. Our initial evaluation reveals that CESA significantly reduces the communication cost compared to SecAgg.
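The mask-cancellation idea that CESA inherits from SecAgg can be sketched on scalar updates (toy secrets and modulus, all invented for illustration; in CESA each client shares secrets with only two peers, which the three-client ring below happens to realise):

```python
import random

P = 2**31 - 1  # public modulus (illustrative choice)

def masked_update(client_id, update, pair_secrets):
    """Mask a scalar model update with pairwise shared secrets. A mask
    derived from a secret is added by the lower-id client and subtracted
    by the higher-id one, so all masks cancel in the server's sum."""
    masked = update
    for other_id, secret in pair_secrets.items():
        mask = random.Random(secret).randrange(P)  # both peers derive the same mask
        masked += mask if client_id < other_id else -mask
    return masked % P

# Three clients; each shares a secret with exactly two peers (a ring).
secrets = {(0, 1): 11, (1, 2): 22, (0, 2): 33}
updates = {0: 5, 1: 7, 2: 9}
pair = {
    0: {1: secrets[(0, 1)], 2: secrets[(0, 2)]},
    1: {0: secrets[(0, 1)], 2: secrets[(1, 2)]},
    2: {0: secrets[(0, 2)], 1: secrets[(1, 2)]},
}
total = sum(masked_update(i, updates[i], pair[i]) for i in range(3)) % P
assert total == sum(updates.values())  # server recovers only the aggregate
```

Each individual masked value looks random to the server; only the sum, where every mask appears once with each sign, reveals the aggregate.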
|
1604.02714
|
Krasimir Yordzhev
|
Krasimir Yordzhev
|
Canonical binary matrices related to bipartite graphs
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The current paper is dedicated to the problem of finding the number of
mutually non-isomorphic bipartite graphs of the type $g=\langle R_g ,C_g ,E_g
\rangle$ for given $n=|R_g |$ and $m=|C_g |$, where $R_g$ and $C_g$ are the
two disjoint parts of the vertices of the graph $g$, and $E_g$ is the set of
edges, $E_g \subseteq R_g \times C_g$. For this purpose, the concept of a
canonical binary matrix is introduced. Distinct canonical matrices
unambiguously describe the distinct, up to isomorphism, bipartite graphs. We
have found a necessary and sufficient condition for an arbitrary matrix to be
canonical. This condition could serve as the basis for a recursive algorithm
for finding all $n \times m$ canonical binary matrices and, consequently, all
bipartite graphs, up to isomorphism, whose parts have cardinalities $n$ and
$m$.
|
[
{
"created": "Sun, 10 Apr 2016 16:51:48 GMT",
"version": "v1"
}
] |
2016-04-12
|
[
[
"Yordzhev",
"Krasimir",
""
]
] |
The current paper is dedicated to the problem of finding the number of mutually non-isomorphic bipartite graphs of the type $g=\langle R_g ,C_g ,E_g \rangle$ for given $n=|R_g |$ and $m=|C_g |$, where $R_g$ and $C_g$ are the two disjoint parts of the vertices of the graph $g$, and $E_g$ is the set of edges, $E_g \subseteq R_g \times C_g$. For this purpose, the concept of a canonical binary matrix is introduced. Distinct canonical matrices unambiguously describe the distinct, up to isomorphism, bipartite graphs. We have found a necessary and sufficient condition for an arbitrary matrix to be canonical. This condition could serve as the basis for a recursive algorithm for finding all $n \times m$ canonical binary matrices and, consequently, all bipartite graphs, up to isomorphism, whose parts have cardinalities $n$ and $m$.
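The canonical-representative idea can be illustrated by brute force for tiny $n$ and $m$ (this sketch uses lexicographic minimality over row/column permutations as the canonical form, one common choice; the paper's exact canonicity condition may differ):

```python
from itertools import permutations, product

def canonical(matrix):
    """Lexicographically smallest row/column permutation of a binary
    matrix, used here as the representative of its isomorphism class."""
    n, m = len(matrix), len(matrix[0])
    best = None
    for rp in permutations(range(n)):
        for cp in permutations(range(m)):
            cand = tuple(tuple(matrix[i][j] for j in cp) for i in rp)
            if best is None or cand < best:
                best = cand
    return best

def count_bipartite_graphs(n, m):
    """Count bipartite graphs with parts of sizes n and m, up to
    part-preserving isomorphism, by brute-force canonicalisation."""
    reps = set()
    for bits in product((0, 1), repeat=n * m):
        matrix = [list(bits[i * m:(i + 1) * m]) for i in range(n)]
        reps.add(canonical(matrix))
    return len(reps)

assert canonical([[1, 0], [0, 1]]) == canonical([[0, 1], [1, 0]])
assert count_bipartite_graphs(2, 2) == 7
```

The recursive algorithm mentioned in the abstract would avoid this exhaustive enumeration by extending canonical matrices row by row.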
|
2307.00527
|
Zhong Li
|
Zhong Li, Jiayang Shi, Matthijs van Leeuwen
|
Graph Neural Networks based Log Anomaly Detection and Explanation
|
Technical Report (A short version was accepted by ICSE'24 poster
track)
| null | null | null |
cs.SE cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Event logs are widely used to record the status of high-tech systems, making
log anomaly detection important for monitoring those systems. Most existing log
anomaly detection methods take a log event count matrix or log event sequences
as input, exploiting quantitative and/or sequential relationships between log
events to detect anomalies. Unfortunately, only considering quantitative or
sequential relationships may result in low detection accuracy. To alleviate
this problem, we propose a graph-based method for unsupervised log anomaly
detection, dubbed Logs2Graphs, which first converts event logs into attributed,
directed, and weighted graphs, and then leverages graph neural networks to
perform graph-level anomaly detection. Specifically, we introduce One-Class
Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel graph
neural network model for detecting graph-level anomalies in a collection of
attributed, directed, and weighted graphs. By coupling the graph representation
and anomaly detection steps, OCDiGCN can learn a representation that is
especially suited for anomaly detection, resulting in a high detection
accuracy. Importantly, for each identified anomaly, we additionally provide a
small subset of nodes that play a crucial role in OCDiGCN's prediction as
explanations, which can offer valuable cues for subsequent root cause
diagnosis. Experiments on five benchmark datasets show that Logs2Graphs
performs at least on par with state-of-the-art log anomaly detection methods on
simple datasets while largely outperforming state-of-the-art log anomaly
detection methods on complicated datasets.
|
[
{
"created": "Sun, 2 Jul 2023 09:38:43 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Oct 2023 08:58:34 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Jan 2024 10:48:54 GMT",
"version": "v3"
}
] |
2024-01-25
|
[
[
"Li",
"Zhong",
""
],
[
"Shi",
"Jiayang",
""
],
[
"van Leeuwen",
"Matthijs",
""
]
] |
Event logs are widely used to record the status of high-tech systems, making log anomaly detection important for monitoring those systems. Most existing log anomaly detection methods take a log event count matrix or log event sequences as input, exploiting quantitative and/or sequential relationships between log events to detect anomalies. Unfortunately, only considering quantitative or sequential relationships may result in low detection accuracy. To alleviate this problem, we propose a graph-based method for unsupervised log anomaly detection, dubbed Logs2Graphs, which first converts event logs into attributed, directed, and weighted graphs, and then leverages graph neural networks to perform graph-level anomaly detection. Specifically, we introduce One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel graph neural network model for detecting graph-level anomalies in a collection of attributed, directed, and weighted graphs. By coupling the graph representation and anomaly detection steps, OCDiGCN can learn a representation that is especially suited for anomaly detection, resulting in a high detection accuracy. Importantly, for each identified anomaly, we additionally provide a small subset of nodes that play a crucial role in OCDiGCN's prediction as explanations, which can offer valuable cues for subsequent root cause diagnosis. Experiments on five benchmark datasets show that Logs2Graphs performs at least on par with state-of-the-art log anomaly detection methods on simple datasets while largely outperforming state-of-the-art log anomaly detection methods on complicated datasets.
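The first step of such a pipeline, turning an event log into a directed, weighted graph, can be sketched as follows (a simplified stand-in: node attributes and the OCDiGCN detection model are omitted, and the construction shown is illustrative only):

```python
from collections import Counter

def log_to_graph(events):
    """Build a directed, weighted transition graph from one event log:
    nodes are event templates, and each edge weight counts how often
    one event immediately follows another."""
    edges = Counter(zip(events, events[1:]))  # consecutive event pairs
    nodes = sorted(set(events))
    return nodes, dict(edges)

nodes, edges = log_to_graph(["open", "read", "read", "close"])
assert nodes == ["close", "open", "read"]
assert edges[("read", "read")] == 1
```

A graph-level anomaly detector then scores each such graph, so both quantitative (edge weights) and sequential (edge directions) relationships are represented at once.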
|
2110.12425
|
Jiashuo Liu
|
Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
|
Kernelized Heterogeneous Risk Minimization
|
35th Conference on Neural Information Processing Systems (NeurIPS
2021), Sydney, Australia. arXiv admin note: text overlap with
arXiv:2105.03818
| null | null |
17
|
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to generalize under distributional shifts is essential to
reliable machine learning, yet models optimized with empirical risk
minimization usually fail on non-i.i.d. testing data. Recently, invariant
learning methods for out-of-distribution (OOD) generalization have proposed
finding causally invariant relationships using multiple environments. However,
modern datasets are frequently multi-sourced without explicit source labels,
rendering many invariant learning methods inapplicable. In this paper, we
propose the Kernelized Heterogeneous Risk Minimization (KerHRM) algorithm,
which achieves both latent heterogeneity exploration and invariant learning in
kernel space, and then gives feedback to the original neural network by
specifying an invariant gradient direction. We theoretically justify our
algorithm and empirically validate its effectiveness with extensive
experiments.
|
[
{
"created": "Sun, 24 Oct 2021 12:26:50 GMT",
"version": "v1"
}
] |
2021-10-26
|
[
[
"Liu",
"Jiashuo",
""
],
[
"Hu",
"Zheyuan",
""
],
[
"Cui",
"Peng",
""
],
[
"Li",
"Bo",
""
],
[
"Shen",
"Zheyan",
""
]
] |
The ability to generalize under distributional shifts is essential to reliable machine learning, yet models optimized with empirical risk minimization usually fail on non-i.i.d. testing data. Recently, invariant learning methods for out-of-distribution (OOD) generalization have proposed finding causally invariant relationships using multiple environments. However, modern datasets are frequently multi-sourced without explicit source labels, rendering many invariant learning methods inapplicable. In this paper, we propose the Kernelized Heterogeneous Risk Minimization (KerHRM) algorithm, which achieves both latent heterogeneity exploration and invariant learning in kernel space, and then gives feedback to the original neural network by specifying an invariant gradient direction. We theoretically justify our algorithm and empirically validate its effectiveness with extensive experiments.
|
2306.06578
|
Weizhe Chen
|
Weizhe Chen and Lantao Liu
|
Long-Term Autonomous Ocean Monitoring with Streaming Samples
|
Proceedings of OCEANS 2019, SEATTLE
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In the autonomous ocean monitoring task, the sampling robot moves in the
environment and accumulates data continuously. The widely adopted spatial
modeling method - standard Gaussian process (GP) regression - becomes
inadequate for processing the large and growing volume of sensing data. To overcome
the computational challenge, this paper presents an environmental modeling
framework using a sparse variant of GP called streaming sparse GP (SSGP). The
SSGP is able to handle streaming data in an online and incremental manner, and
is therefore suitable for long-term autonomous environmental monitoring. The
SSGP summarizes the collected data using a small set of pseudo data points that
best represent the whole dataset, and updates the hyperparameters and pseudo
point locations in a streaming fashion, leading to high-quality approximation
of the underlying environmental model with significantly reduced computational
cost and memory demand.
|
[
{
"created": "Sun, 11 Jun 2023 03:59:26 GMT",
"version": "v1"
}
] |
2023-06-13
|
[
[
"Chen",
"Weizhe",
""
],
[
"Liu",
"Lantao",
""
]
] |
In the autonomous ocean monitoring task, the sampling robot moves in the environment and accumulates data continuously. The widely adopted spatial modeling method - standard Gaussian process (GP) regression - becomes inadequate for processing the large and growing volume of sensing data. To overcome the computational challenge, this paper presents an environmental modeling framework using a sparse variant of GP called streaming sparse GP (SSGP). The SSGP is able to handle streaming data in an online and incremental manner, and is therefore suitable for long-term autonomous environmental monitoring. The SSGP summarizes the collected data using a small set of pseudo data points that best represent the whole dataset, and updates the hyperparameters and pseudo point locations in a streaming fashion, leading to high-quality approximation of the underlying environmental model with significantly reduced computational cost and memory demand.
|
2406.10450
|
Haohao Qu
|
Haohao Qu, Wenqi Fan, Zihuai Zhao, Qing Li
|
TokenRec: Learning to Tokenize ID for LLM-based Generative
Recommendation
| null | null | null | null |
cs.IR cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a growing interest in utilizing large-scale language models (LLMs)
to advance next-generation Recommender Systems (RecSys), driven by their
outstanding language understanding and in-context learning capabilities. In
this scenario, tokenizing (i.e., indexing) users and items becomes essential
for ensuring a seamless alignment of LLMs with recommendations. While several
studies have made progress in representing users and items through textual
contents or latent representations, challenges remain in efficiently capturing
high-order collaborative knowledge into discrete tokens that are compatible
with LLMs. Additionally, the majority of existing tokenization approaches often
face difficulties in generalizing effectively to new/unseen users or items that
were not in the training corpus. To address these challenges, we propose a
novel framework called TokenRec, which introduces not only an effective ID
tokenization strategy but also an efficient retrieval paradigm for LLM-based
recommendations. Specifically, our tokenization strategy, Masked
Vector-Quantized (MQ) Tokenizer, involves quantizing the masked user/item
representations learned from collaborative filtering into discrete tokens, thus
achieving a smooth incorporation of high-order collaborative knowledge and a
generalizable tokenization of users and items for LLM-based RecSys. Meanwhile,
our generative retrieval paradigm is designed to efficiently recommend top-$K$
items for users to eliminate the need for the time-consuming auto-regressive
decoding and beam search processes used by LLMs, thus significantly reducing
inference time. Comprehensive experiments validate the effectiveness of the
proposed methods, demonstrating that TokenRec outperforms competitive
benchmarks, including both traditional recommender systems and emerging
LLM-based recommender systems.
|
[
{
"created": "Sat, 15 Jun 2024 00:07:44 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Qu",
"Haohao",
""
],
[
"Fan",
"Wenqi",
""
],
[
"Zhao",
"Zihuai",
""
],
[
"Li",
"Qing",
""
]
] |
There is a growing interest in utilizing large-scale language models (LLMs) to advance next-generation Recommender Systems (RecSys), driven by their outstanding language understanding and in-context learning capabilities. In this scenario, tokenizing (i.e., indexing) users and items becomes essential for ensuring a seamless alignment of LLMs with recommendations. While several studies have made progress in representing users and items through textual contents or latent representations, challenges remain in efficiently capturing high-order collaborative knowledge into discrete tokens that are compatible with LLMs. Additionally, the majority of existing tokenization approaches often face difficulties in generalizing effectively to new/unseen users or items that were not in the training corpus. To address these challenges, we propose a novel framework called TokenRec, which introduces not only an effective ID tokenization strategy but also an efficient retrieval paradigm for LLM-based recommendations. Specifically, our tokenization strategy, Masked Vector-Quantized (MQ) Tokenizer, involves quantizing the masked user/item representations learned from collaborative filtering into discrete tokens, thus achieving a smooth incorporation of high-order collaborative knowledge and a generalizable tokenization of users and items for LLM-based RecSys. Meanwhile, our generative retrieval paradigm is designed to efficiently recommend top-$K$ items for users to eliminate the need for the time-consuming auto-regressive decoding and beam search processes used by LLMs, thus significantly reducing inference time. Comprehensive experiments validate the effectiveness of the proposed methods, demonstrating that TokenRec outperforms competitive benchmarks, including both traditional recommender systems and emerging LLM-based recommender systems.
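The core of a vector-quantized ID tokenizer, mapping a continuous representation to the index of its nearest codebook entry, can be sketched as follows (toy codebook and vectors, invented for illustration; in TokenRec the codebook is learned jointly with masked collaborative-filtering representations):

```python
import numpy as np

# A toy 4-entry codebook of 2-d vectors (learned in a real system).
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def tokenize(representation):
    """Map a continuous user/item representation to the index of the
    nearest codebook vector - the discrete token fed to the LLM."""
    dists = ((codebook - representation) ** 2).sum(axis=1)
    return int(np.argmin(dists))

user_vec = np.array([0.9, 0.2])  # e.g. from collaborative filtering
assert tokenize(user_vec) == 1   # closest to codebook entry [1, 0]
```

Because an unseen user is tokenized by proximity to learned codebook entries rather than by a memorized ID, the scheme generalizes to users and items absent from the training corpus.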
|
2004.14107
|
He Wang
|
Feixiang He, Yuanhang Xiang, Xi Zhao, He Wang
|
Informative Scene Decomposition for Crowd Analysis, Comparison and
Simulation Guidance
|
accepted in SIGGRAPH 2020
| null | null | null |
cs.GR cs.CV cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowd simulation is a central topic in several fields including graphics. To
achieve high-fidelity simulations, data has been increasingly relied upon for
analysis and simulation guidance. However, the information in real-world data
is often noisy, mixed and unstructured, making effective analysis difficult;
consequently, it has not been fully utilized. With the fast-growing volume
of crowd data, such a bottleneck needs to be addressed. In this paper, we
propose a new framework which comprehensively tackles this problem. It centers
at an unsupervised method for analysis. The method takes as input raw and noisy
data with highly mixed multi-dimensional (space, time and dynamics)
information, and automatically structures it by learning the correlations among
these dimensions. The dimensions together with their correlations fully
describe the scene semantics which consists of recurring activity patterns in a
scene, manifested as space flows with temporal and dynamics profiles. The
effectiveness and robustness of the analysis have been tested on datasets with
great variations in volume, duration, environment and crowd dynamics. Based on
the analysis, new methods for data visualization, simulation evaluation and
simulation guidance are also proposed. Together, our framework establishes a
highly automated pipeline from raw data to crowd analysis, comparison and
simulation guidance. Extensive experiments and evaluations have been conducted
to show the flexibility, versatility and intuitiveness of our framework.
|
[
{
"created": "Wed, 29 Apr 2020 12:03:32 GMT",
"version": "v1"
}
] |
2020-04-30
|
[
[
"He",
"Feixiang",
""
],
[
"Xiang",
"Yuanhang",
""
],
[
"Zhao",
"Xi",
""
],
[
"Wang",
"He",
""
]
] |
Crowd simulation is a central topic in several fields including graphics. To achieve high-fidelity simulations, data has been increasingly relied upon for analysis and simulation guidance. However, the information in real-world data is often noisy, mixed and unstructured, making effective analysis difficult; consequently, it has not been fully utilized. With the fast-growing volume of crowd data, such a bottleneck needs to be addressed. In this paper, we propose a new framework which comprehensively tackles this problem. It centers at an unsupervised method for analysis. The method takes as input raw and noisy data with highly mixed multi-dimensional (space, time and dynamics) information, and automatically structures it by learning the correlations among these dimensions. The dimensions together with their correlations fully describe the scene semantics which consists of recurring activity patterns in a scene, manifested as space flows with temporal and dynamics profiles. The effectiveness and robustness of the analysis have been tested on datasets with great variations in volume, duration, environment and crowd dynamics. Based on the analysis, new methods for data visualization, simulation evaluation and simulation guidance are also proposed. Together, our framework establishes a highly automated pipeline from raw data to crowd analysis, comparison and simulation guidance. Extensive experiments and evaluations have been conducted to show the flexibility, versatility and intuitiveness of our framework.
|
2011.06922
|
Yoav Shalev
|
Yoav Shalev, Lior Wolf
|
Image Animation with Perturbed Masks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel approach for image-animation of a source image by a
driving video, both depicting the same type of object. We do not assume the
existence of pose models and our method is able to animate arbitrary objects
without knowledge of the object's structure. Furthermore, both the driving
video and the source image are seen only at test time. Our method is based
on a shared mask generator, which separates the foreground object from its
background, and captures the object's general pose and shape. To control the
source of the identity of the output frame, we employ perturbations to
interrupt the unwanted identity information on the driver's mask. A
mask-refinement module then replaces the identity of the driver with the
identity of the source. Conditioned on the source image, the transformed mask
is then decoded by a multi-scale generator that renders a realistic image, in
which the content of the source frame is animated by the pose in the driving
video. Due to the lack of fully supervised data, we train on the task of
reconstructing frames from the same video the source image is taken from. Our
method is shown to greatly outperform the state-of-the-art methods on multiple
benchmarks. Our code and samples are available at
https://github.com/itsyoavshalev/Image-Animation-with-Perturbed-Masks.
|
[
{
"created": "Fri, 13 Nov 2020 14:17:17 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2020 19:23:52 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Mar 2022 09:30:26 GMT",
"version": "v3"
}
] |
2022-03-30
|
[
[
"Shalev",
"Yoav",
""
],
[
"Wolf",
"Lior",
""
]
] |
We present a novel approach for image-animation of a source image by a driving video, both depicting the same type of object. We do not assume the existence of pose models and our method is able to animate arbitrary objects without knowledge of the object's structure. Furthermore, both the driving video and the source image are seen only at test time. Our method is based on a shared mask generator, which separates the foreground object from its background, and captures the object's general pose and shape. To control the source of the identity of the output frame, we employ perturbations to interrupt the unwanted identity information on the driver's mask. A mask-refinement module then replaces the identity of the driver with the identity of the source. Conditioned on the source image, the transformed mask is then decoded by a multi-scale generator that renders a realistic image, in which the content of the source frame is animated by the pose in the driving video. Due to the lack of fully supervised data, we train on the task of reconstructing frames from the same video the source image is taken from. Our method is shown to greatly outperform the state-of-the-art methods on multiple benchmarks. Our code and samples are available at https://github.com/itsyoavshalev/Image-Animation-with-Perturbed-Masks.
|
2208.04980
|
Christopher Perez
|
Christopher Perez, Sayar Karmakar
|
An NLP-Assisted Bayesian Time Series Analysis for Prevalence of Twitter
Cyberbullying During the COVID-19 Pandemic
|
22 pages, 15 figures
| null | null | null |
cs.SI cs.LG stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
COVID-19 has brought about many changes in social dynamics. Stay-at-home
orders and disruptions in school teaching can influence bullying behavior
in-person and online, both of which lead to negative outcomes in victims. To
study cyberbullying specifically, 1 million tweets containing keywords
associated with abuse were collected from the beginning of 2019 to the end of
2021 with the Twitter API search endpoint. A natural language processing model
pre-trained on a Twitter corpus generated probabilities for the tweets being
offensive and hateful. To overcome limitations of sampling, data was also
collected using the count endpoint. The fraction of tweets from a given daily
sample marked as abusive is multiplied by the number reported by the count
endpoint. Once these adjusted counts are assembled, a Bayesian autoregressive
Poisson model allows one to study the mean trend and lag functions of the data
and how they vary over time. The results reveal strong weekly and yearly
seasonality in hateful speech but with slight differences across years that may
be attributed to COVID-19.
|
[
{
"created": "Sat, 23 Jul 2022 15:24:07 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Mar 2023 01:02:23 GMT",
"version": "v2"
}
] |
2023-03-02
|
[
[
"Perez",
"Christopher",
""
],
[
"Karmakar",
"Sayar",
""
]
] |
COVID-19 has brought about many changes in social dynamics. Stay-at-home orders and disruptions in school teaching can influence bullying behavior in-person and online, both of which lead to negative outcomes in victims. To study cyberbullying specifically, 1 million tweets containing keywords associated with abuse were collected from the beginning of 2019 to the end of 2021 with the Twitter API search endpoint. A natural language processing model pre-trained on a Twitter corpus generated probabilities for the tweets being offensive and hateful. To overcome limitations of sampling, data was also collected using the count endpoint. The fraction of tweets from a given daily sample marked as abusive is multiplied by the number reported by the count endpoint. Once these adjusted counts are assembled, a Bayesian autoregressive Poisson model allows one to study the mean trend and lag functions of the data and how they vary over time. The results reveal strong weekly and yearly seasonality in hateful speech but with slight differences across years that may be attributed to COVID-19.
|
2306.07075
|
John Nay
|
John J. Nay, David Karamardian, Sarah B. Lawsky, Wenting Tao, Meghana
Bhat, Raghav Jain, Aaron Travis Lee, Jonathan H. Choi, Jungo Kasai
|
Large Language Models as Tax Attorneys: A Case Study in Legal
Capabilities Emergence
| null | null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Better understanding of Large Language Models' (LLMs) legal analysis
abilities can contribute to improving the efficiency of legal services,
governing artificial intelligence, and leveraging LLMs to identify
inconsistencies in law. This paper explores LLM capabilities in applying tax
law. We choose this area of law because it has a structure that allows us to
set up automated validation pipelines across thousands of examples, requires
logical reasoning and maths skills, and enables us to test LLM capabilities in
a manner relevant to real-world economic lives of citizens and companies. Our
experiments demonstrate emerging legal understanding capabilities, with
improved performance in each subsequent OpenAI model release. We experiment
with retrieving and utilising the relevant legal authority to assess the impact
of providing additional legal context to LLMs. Few-shot prompting, presenting
examples of question-answer pairs, is also found to significantly enhance the
performance of the most advanced model, GPT-4. The findings indicate that LLMs,
particularly when combined with prompting enhancements and the correct legal
texts, can perform at high levels of accuracy but not yet at expert tax lawyer
levels. As LLMs continue to advance, their ability to reason about law
autonomously could have significant implications for the legal profession and
AI governance.
|
[
{
"created": "Mon, 12 Jun 2023 12:40:48 GMT",
"version": "v1"
}
] |
2023-06-13
|
[
[
"Nay",
"John J.",
""
],
[
"Karamardian",
"David",
""
],
[
"Lawsky",
"Sarah B.",
""
],
[
"Tao",
"Wenting",
""
],
[
"Bhat",
"Meghana",
""
],
[
"Jain",
"Raghav",
""
],
[
"Lee",
"Aaron Travis",
""
],
[
"Choi",
"Jonathan H.",
""
],
[
"Kasai",
"Jungo",
""
]
] |
Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and maths skills, and enables us to test LLM capabilities in a manner relevant to real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilising the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.
|
1911.10038
|
Matej Ul\v{c}ar
|
Matej Ul\v{c}ar, Kristiina Vaik, Jessica Lindstr\"om, Milda
Dailid\.enait\.e, Marko Robnik-\v{S}ikonja
|
Multilingual Culture-Independent Word Analogy Datasets
|
7 pages, LREC2020 conference
|
Proceedings of the 12th Conference on Language Resources and
Evaluation (LREC 2020), pages 4074-4080
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In text processing, deep neural networks mostly use word embeddings as an
input. Embeddings have to ensure that relations between words are reflected
through distances in a high-dimensional numeric space. To compare the quality
of different text embeddings, typically, we use benchmark datasets. We present
a collection of such datasets for the word analogy task in nine languages:
Croatian, English, Estonian, Finnish, Latvian, Lithuanian, Russian, Slovenian,
and Swedish. We redesigned the original monolingual analogy task to be much
more culturally independent and also constructed cross-lingual analogy datasets
for the involved languages. We present basic statistics of the created datasets
and their initial evaluation using fastText embeddings.
|
[
{
"created": "Fri, 22 Nov 2019 13:39:06 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Mar 2020 15:32:16 GMT",
"version": "v2"
}
] |
2022-06-01
|
[
[
"Ulčar",
"Matej",
""
],
[
"Vaik",
"Kristiina",
""
],
[
"Lindström",
"Jessica",
""
],
[
"Dailidėnaitė",
"Milda",
""
],
[
"Robnik-Šikonja",
"Marko",
""
]
] |
In text processing, deep neural networks mostly use word embeddings as an input. Embeddings have to ensure that relations between words are reflected through distances in a high-dimensional numeric space. To compare the quality of different text embeddings, typically, we use benchmark datasets. We present a collection of such datasets for the word analogy task in nine languages: Croatian, English, Estonian, Finnish, Latvian, Lithuanian, Russian, Slovenian, and Swedish. We redesigned the original monolingual analogy task to be much more culturally independent and also constructed cross-lingual analogy datasets for the involved languages. We present basic statistics of the created datasets and their initial evaluation using fastText embeddings.
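The analogy task itself reduces to simple vector arithmetic; a minimal sketch with toy vectors (word list and values invented for illustration — the paper's evaluation uses fastText embeddings instead):

```python
import numpy as np

# Toy embeddings standing in for real pretrained word vectors.
emb = {
    "paris":   np.array([1.0, 0.0, 1.0]),
    "france":  np.array([1.0, 1.0, 0.0]),
    "rome":    np.array([0.0, 0.2, 1.0]),
    "italy":   np.array([0.0, 1.2, 0.0]),
    "berlin":  np.array([0.9, 0.1, 0.8]),
    "germany": np.array([0.9, 1.1, 0.1]),
}

def solve_analogy(a, b, c, vocab=emb):
    """Answer 'a is to b as c is to ?' with the 3CosAdd rule:
    return the word whose vector is closest to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]

    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

assert solve_analogy("paris", "france", "rome") == "italy"
```

A benchmark dataset simply lists many such quadruples, and embedding quality is reported as the fraction answered correctly.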
|
2001.06452
|
Jingxuan Huang
|
Jingxuan Huang, Zesong Fei, Congzhe Cao, and Ming Xiao
|
Design and Analysis of Online Fountain Codes for Intermediate
Performance
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the benefit of improved intermediate performance, online fountain codes
have recently attracted much research attention. However, there is a trade-off
between the intermediate performance and the full recovery overhead of online
fountain codes, which prevents both from being improved simultaneously. We
analyze this trade-off and propose to improve both performance measures. We
first propose a method called Online Fountain Codes without Build-up phase
(OFCNB), in which the degree-1 coded symbols are transmitted first and the
build-up phase is removed to improve the intermediate performance. We then
analyze the performance of OFCNB theoretically. Motivated by the analysis
results, we propose Systematic Online Fountain Codes (SOFC) to further reduce
the full recovery overhead. Theoretical analysis shows that SOFC has better
intermediate performance and also requires lower full recovery overhead when
the channel erasure rate is below a constant. Simulation results verify the
analyses and demonstrate the superior performance of OFCNB and SOFC in
comparison to other online fountain codes.
|
[
{
"created": "Fri, 17 Jan 2020 17:52:55 GMT",
"version": "v1"
}
] |
2020-01-20
|
[
[
"Huang",
"Jingxuan",
""
],
[
"Fei",
"Zesong",
""
],
[
"Cao",
"Congzhe",
""
],
[
"Xiao",
"Ming",
""
]
] |
Owing to their improved intermediate performance, online fountain codes have recently attracted much research attention. However, there is a trade-off between the intermediate performance and the full recovery overhead of online fountain codes, which prevents them from being improved simultaneously. We analyze this trade-off and propose to improve both performance measures. We first propose a method called Online Fountain Codes without Build-up phase (OFCNB), in which the degree-1 coded symbols are transmitted first and the build-up phase is removed to improve the intermediate performance. Then we analyze the performance of OFCNB theoretically. Motivated by the analysis results, we propose Systematic Online Fountain Codes (SOFC) to further reduce the full recovery overhead. Theoretical analysis shows that SOFC has better intermediate performance, and it also requires lower full recovery overhead when the channel erasure rate is lower than a constant. Simulation results verify the analyses and demonstrate the superior performance of OFCNB and SOFC in comparison to other online fountain codes.
|
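A minimal sketch of the OFCNB idea described above: transmit every source symbol once as a degree-1 packet first (no build-up phase), then keep sending random-degree XOR packets. The symbol representation (small integers combined by XOR over GF(2)) and the uniform degree choice are assumptions for illustration; a real online fountain code adapts degrees to receiver feedback.

```python
import random

def ofcnb_stream(source, rng):
    """Toy coded-symbol stream in the spirit of OFCNB: every source symbol is
    sent once as a degree-1 packet first, followed by random-degree XOR
    packets. A real online fountain code would choose degrees based on
    receiver feedback."""
    k = len(source)
    for i in range(k):                       # degree-1 phase
        yield ({i}, source[i])
    while True:                              # random coded phase
        deg = rng.randint(2, k)
        idx = set(rng.sample(range(k), deg))
        val = 0
        for i in idx:
            val ^= source[i]                 # XOR over GF(2)
        yield (idx, val)

rng = random.Random(0)
src = [3, 5, 7, 9]
stream = ofcnb_stream(src, rng)
print([next(stream) for _ in range(4)])      # the four degree-1 packets
```

On an erasure channel, the receiver can decode any degree-1 packet immediately, which is what drives the improved intermediate performance.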
1905.13662
|
Francesco Locatello
|
Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer,
Bernhard Sch\"olkopf, Olivier Bachem
|
On the Fairness of Disentangled Representations
| null |
NeurIPS 2019
| null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, there has been significant interest in learning disentangled
representations, as they promise increased interpretability, generalization to
unseen scenarios and faster learning on downstream tasks. In this paper, we
investigate the usefulness of different notions of disentanglement for
improving the fairness of downstream prediction tasks based on representations.
We consider the setting where the goal is to predict a target variable based on
the learned representation of high-dimensional observations (such as images)
that depend on both the target variable and an \emph{unobserved} sensitive
variable. We show that in this setting both the optimal and empirical
predictions can be unfair, even if the target variable and the sensitive
variable are independent. Analyzing the representations of more than
\num{12600} trained state-of-the-art disentangled models, we observe that
several disentanglement scores are consistently correlated with increased
fairness, suggesting that disentanglement may be a useful property to encourage
fairness when sensitive variables are not observed.
|
[
{
"created": "Fri, 31 May 2019 15:03:12 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Oct 2019 10:56:08 GMT",
"version": "v2"
}
] |
2019-10-30
|
[
[
"Locatello",
"Francesco",
""
],
[
"Abbati",
"Gabriele",
""
],
[
"Rainforth",
"Tom",
""
],
[
"Bauer",
"Stefan",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Bachem",
"Olivier",
""
]
] |
Recently there has been a significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios and faster learning on downstream tasks. In this paper, we investigate the usefulness of different notions of disentanglement for improving the fairness of downstream prediction tasks based on representations. We consider the setting where the goal is to predict a target variable based on the learned representation of high-dimensional observations (such as images) that depend on both the target variable and an \emph{unobserved} sensitive variable. We show that in this setting both the optimal and empirical predictions can be unfair, even if the target variable and the sensitive variable are independent. Analyzing the representations of more than \num{12600} trained state-of-the-art disentangled models, we observe that several disentanglement scores are consistently correlated with increased fairness, suggesting that disentanglement may be a useful property to encourage fairness when sensitive variables are not observed.
|
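The group-wise unfairness the abstract refers to can be probed with a simple demographic-parity gap between predictions conditioned on the sensitive variable. This is a sketch of one common fairness score, not necessarily the exact metric aggregated in the paper.

```python
import numpy as np

def unfairness(y_pred, s):
    """Demographic-parity gap |P(yhat=1 | s=0) - P(yhat=1 | s=1)|: one simple
    group-wise unfairness score for a downstream predictor."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

# Predictions perfectly aligned with the sensitive variable: maximally unfair.
print(unfairness([1, 1, 0, 0], [0, 0, 1, 1]))   # -> 1.0
```

In the paper's setting the sensitive variable is unobserved at training time, so such a score can only be computed post hoc when evaluating the learned representation.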
2303.13076
|
Xiaoshi Wu
|
Xiaoshi Wu, Feng Zhu, Rui Zhao, Hongsheng Li
|
CORA: Adapting CLIP for Open-Vocabulary Detection with Region Prompting
and Anchor Pre-Matching
|
11 pages, 4 figures. Accepted by CVPR 2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Open-vocabulary detection (OVD) is an object detection task aiming at
detecting objects from novel categories beyond the base categories on which the
detector is trained. Recent OVD methods rely on large-scale visual-language
pre-trained models, such as CLIP, for recognizing novel objects. We identify
the two core obstacles that need to be tackled when incorporating these models
into detector training: (1) the distribution mismatch that happens when
applying a VL-model trained on whole images to region recognition tasks; (2)
the difficulty of localizing objects of unseen classes. To overcome these
obstacles, we propose CORA, a DETR-style framework that adapts CLIP for
Open-vocabulary detection by Region prompting and Anchor pre-matching. Region
prompting mitigates the whole-to-region distribution gap by prompting the
region features of the CLIP-based region classifier. Anchor pre-matching helps
learn generalizable object localization through a class-aware matching mechanism.
We evaluate CORA on the COCO OVD benchmark, where we achieve 41.7 AP50 on novel
classes, which outperforms the previous SOTA by 2.4 AP50 even without resorting
to extra training data. When extra training data is available, we train
CORA$^+$ on both ground-truth base-category annotations and additional pseudo
bounding box labels computed by CORA. CORA$^+$ achieves 43.1 AP50 on the COCO
OVD benchmark and 28.1 box APr on the LVIS OVD benchmark.
|
[
{
"created": "Thu, 23 Mar 2023 07:13:57 GMT",
"version": "v1"
}
] |
2023-03-24
|
[
[
"Wu",
"Xiaoshi",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
],
[
"Li",
"Hongsheng",
""
]
] |
Open-vocabulary detection (OVD) is an object detection task aiming at detecting objects from novel categories beyond the base categories on which the detector is trained. Recent OVD methods rely on large-scale visual-language pre-trained models, such as CLIP, for recognizing novel objects. We identify the two core obstacles that need to be tackled when incorporating these models into detector training: (1) the distribution mismatch that happens when applying a VL-model trained on whole images to region recognition tasks; (2) the difficulty of localizing objects of unseen classes. To overcome these obstacles, we propose CORA, a DETR-style framework that adapts CLIP for Open-vocabulary detection by Region prompting and Anchor pre-matching. Region prompting mitigates the whole-to-region distribution gap by prompting the region features of the CLIP-based region classifier. Anchor pre-matching helps learning generalizable object localization by a class-aware matching mechanism. We evaluate CORA on the COCO OVD benchmark, where we achieve 41.7 AP50 on novel classes, which outperforms the previous SOTA by 2.4 AP50 even without resorting to extra training data. When extra training data is available, we train CORA$^+$ on both ground-truth base-category annotations and additional pseudo bounding box labels computed by CORA. CORA$^+$ achieves 43.1 AP50 on the COCO OVD benchmark and 28.1 box APr on the LVIS OVD benchmark.
|
1904.07965
|
Fabrizio Sebastiani
|
Andrea Esuli, Alejandro Moreo, Fabrizio Sebastiani
|
Cross-Lingual Sentiment Quantification
|
Identical to previous version, but for the abstract, which is now
identical to the one in the published version
|
Final version published in IEEE Intelligent Systems 35(3):106-114,
2020
|
10.1109/MIS.2020.2979203
| null |
cs.LG cs.IR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
\emph{Sentiment Quantification} (i.e., the task of estimating the relative
frequency of sentiment-related classes -- such as \textsf{Positive} and
\textsf{Negative} -- in a set of unlabelled documents) is an important topic in
sentiment analysis, as the study of sentiment-related quantities and trends
across a population is often of higher interest than the analysis of individual
instances. In this work we propose a method for \emph{Cross-Lingual Sentiment
Quantification}, the task of performing sentiment quantification when training
documents are available for a source language $\mathcal{S}$ but not for the
target language $\mathcal{T}$ for which sentiment quantification needs to be
performed. Cross-lingual sentiment quantification (and cross-lingual
\emph{text} quantification in general) has never been discussed before in the
literature; we establish baseline results for the binary case by combining
state-of-the-art quantification methods with methods capable of generating
cross-lingual vectorial representations of the source and target documents
involved. We present experimental results obtained on publicly available
datasets for cross-lingual sentiment classification; the results show that the
presented methods can perform cross-lingual sentiment quantification with a
surprising level of accuracy.
|
[
{
"created": "Tue, 16 Apr 2019 20:32:02 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jul 2020 13:50:58 GMT",
"version": "v2"
}
] |
2021-09-22
|
[
[
"Esuli",
"Andrea",
""
],
[
"Moreo",
"Alejandro",
""
],
[
"Sebastiani",
"Fabrizio",
""
]
] |
\emph{Sentiment Quantification} (i.e., the task of estimating the relative frequency of sentiment-related classes -- such as \textsf{Positive} and \textsf{Negative} -- in a set of unlabelled documents) is an important topic in sentiment analysis, as the study of sentiment-related quantities and trends across a population is often of higher interest than the analysis of individual instances. In this work we propose a method for \emph{Cross-Lingual Sentiment Quantification}, the task of performing sentiment quantification when training documents are available for a source language $\mathcal{S}$ but not for the target language $\mathcal{T}$ for which sentiment quantification needs to be performed. Cross-lingual sentiment quantification (and cross-lingual \emph{text} quantification in general) has never been discussed before in the literature; we establish baseline results for the binary case by combining state-of-the-art quantification methods with methods capable of generating cross-lingual vectorial representations of the source and target documents involved. We present experimental results obtained on publicly available datasets for cross-lingual sentiment classification; the results show that the presented methods can perform cross-lingual sentiment quantification with a surprising level of accuracy.
|
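The quantification baselines that the abstract combines with cross-lingual representations can be sketched as classify-and-count (CC) and its adjusted variant (ACC). The `tpr`/`fpr` values below are hypothetical; in practice they would be estimated on source-language validation data.

```python
def classify_and_count(preds):
    """CC: estimate class prevalence as the fraction of positive predictions."""
    return sum(preds) / len(preds)

def adjusted_cc(preds, tpr, fpr):
    """ACC: correct the CC estimate with the classifier's true/false positive
    rates, here assumed to be estimated on source-language validation data."""
    cc = classify_and_count(preds)
    return max(0.0, min(1.0, (cc - fpr) / (tpr - fpr)))

# Hypothetical binary predictions on unlabelled target-language documents.
preds = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
print(classify_and_count(preds))             # -> 0.3
print(adjusted_cc(preds, tpr=0.8, fpr=0.1))  # ~0.286 after correction
```

The cross-lingual twist is only in where the classifier comes from: it is trained on source-language documents mapped into a shared vectorial space with the target-language documents.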
1511.06468
|
Di Wang
|
Di Wang, Michael Mahoney, Nishanth Mohan, Satish Rao
|
Faster Parallel Solver for Positive Linear Programs via
Dynamically-Bucketed Selective Coordinate Descent
| null | null | null | null |
cs.DS cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide improved parallel approximation algorithms for the important class
of packing and covering linear programs. In particular, we present new parallel
$\epsilon$-approximate packing and covering solvers which run in
$\tilde{O}(1/\epsilon^2)$ expected time, i.e., in expectation they take
$\tilde{O}(1/\epsilon^2)$ iterations and they do $\tilde{O}(N/\epsilon^2)$
total work, where $N$ is the size of the constraint matrix and $\epsilon$ is
the error parameter, and where the $\tilde{O}$ hides logarithmic factors. To
achieve our improvement, we introduce an algorithmic technique of broader
interest: dynamically-bucketed selective coordinate descent (DB-SCD). At each
step of the iterative optimization algorithm, the DB-SCD method dynamically
buckets the coordinates of the gradient into those of roughly equal magnitude,
and it updates all the coordinates in one of the buckets. This
dynamically-bucketed updating permits us to take steps along several
coordinates with similar-sized gradients, thereby permitting more appropriate
step sizes at each step of the algorithm. In particular, this technique allows
us to use in a straightforward manner the recent analysis from the breakthrough
results of Allen-Zhu and Orecchia [2] to achieve our still-further improved
bounds. More generally, this method addresses "interference" among coordinates,
by which we mean the impact of the update of one coordinate on the gradients of
other coordinates. Such interference is a core issue in parallelizing
optimization routines that rely on smoothness properties. Since our DB-SCD
method reduces interference via updating a selective subset of variables at
each iteration, we expect it may also have more general applicability in
optimization.
|
[
{
"created": "Fri, 20 Nov 2015 01:10:13 GMT",
"version": "v1"
}
] |
2015-11-23
|
[
[
"Wang",
"Di",
""
],
[
"Mahoney",
"Michael",
""
],
[
"Mohan",
"Nishanth",
""
],
[
"Rao",
"Satish",
""
]
] |
We provide improved parallel approximation algorithms for the important class of packing and covering linear programs. In particular, we present new parallel $\epsilon$-approximate packing and covering solvers which run in $\tilde{O}(1/\epsilon^2)$ expected time, i.e., in expectation they take $\tilde{O}(1/\epsilon^2)$ iterations and they do $\tilde{O}(N/\epsilon^2)$ total work, where $N$ is the size of the constraint matrix and $\epsilon$ is the error parameter, and where the $\tilde{O}$ hides logarithmic factors. To achieve our improvement, we introduce an algorithmic technique of broader interest: dynamically-bucketed selective coordinate descent (DB-SCD). At each step of the iterative optimization algorithm, the DB-SCD method dynamically buckets the coordinates of the gradient into those of roughly equal magnitude, and it updates all the coordinates in one of the buckets. This dynamically-bucketed updating permits us to take steps along several coordinates with similar-sized gradients, thereby permitting more appropriate step sizes at each step of the algorithm. In particular, this technique allows us to use in a straightforward manner the recent analysis from the breakthrough results of Allen-Zhu and Orecchia [2] to achieve our still-further improved bounds. More generally, this method addresses "interference" among coordinates, by which we mean the impact of the update of one coordinate on the gradients of other coordinates. Such interference is a core issue in parallelizing optimization routines that rely on smoothness properties. Since our DB-SCD method reduces interference via updating a selective subset of variables at each iteration, we expect it may also have more general applicability in optimization.
|
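The dynamic bucketing step described above can be sketched as follows: coordinates are grouped by gradient magnitude into power-of-two buckets, and only the coordinates in the largest-magnitude bucket are updated. This is a toy illustration of the DB-SCD selection rule; the paper applies it inside a packing/covering LP solver with width-dependent step sizes.

```python
import numpy as np

def db_scd_step(x, grad, step=0.1):
    """One dynamically-bucketed selective coordinate descent step (sketch):
    group coordinates into buckets of roughly equal gradient magnitude
    (powers of two) and update only the largest-magnitude bucket."""
    mag = np.abs(grad)
    buckets = np.full(grad.shape, np.iinfo(np.int64).min, dtype=np.int64)
    live = mag > 0
    buckets[live] = np.floor(np.log2(mag[live])).astype(np.int64)
    sel = buckets == buckets.max()          # coordinates in the top bucket
    x = x.copy()
    x[sel] -= step * grad[sel]              # plain gradient step on the bucket
    return x, sel

x0 = np.zeros(4)
g = np.array([8.0, 9.0, 1.0, 0.1])         # coords 0 and 1 fall in bucket [8, 16)
x1, sel = db_scd_step(x0, g)
print(sel)
```

Updating only similar-magnitude coordinates lets a single step size suit everything that moves, which is the "interference"-reduction argument in the abstract.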
2006.07228
|
Tao Sun
|
Mohammad Rasouli, Tao Sun, Ram Rajagopal
|
FedGAN: Federated Generative Adversarial Networks for Distributed Data
|
23 pages, 10 figures
| null | null | null |
cs.LG cs.CV cs.MA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Federated Generative Adversarial Network (FedGAN) for training a
GAN across distributed sources of non-independent-and-identically-distributed
data, subject to communication and privacy constraints. Our algorithm uses
local generators and discriminators which are periodically synced via an
intermediary that averages and broadcasts the generator and discriminator
parameters. We theoretically prove the convergence of FedGAN with both equal
and two-time-scale updates of the generator and discriminator, under standard
assumptions, using stochastic approximations and communication-efficient
stochastic gradient descent. We evaluate FedGAN on toy examples (a 2D system,
mixed Gaussians, and the Swiss roll), image datasets (MNIST, CIFAR-10, and
CelebA), and time series datasets (household electricity consumption and
electric vehicle charging sessions). We show that FedGAN converges and matches
the performance of a general distributed GAN while reducing communication
complexity. We also show its robustness to reduced communications.
|
[
{
"created": "Fri, 12 Jun 2020 14:36:43 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jun 2020 06:38:12 GMT",
"version": "v2"
}
] |
2020-06-16
|
[
[
"Rasouli",
"Mohammad",
""
],
[
"Sun",
"Tao",
""
],
[
"Rajagopal",
"Ram",
""
]
] |
We propose Federated Generative Adversarial Network (FedGAN) for training a GAN across distributed sources of non-independent-and-identically-distributed data, subject to communication and privacy constraints. Our algorithm uses local generators and discriminators which are periodically synced via an intermediary that averages and broadcasts the generator and discriminator parameters. We theoretically prove the convergence of FedGAN with both equal and two-time-scale updates of the generator and discriminator, under standard assumptions, using stochastic approximations and communication-efficient stochastic gradient descent. We evaluate FedGAN on toy examples (a 2D system, mixed Gaussians, and the Swiss roll), image datasets (MNIST, CIFAR-10, and CelebA), and time series datasets (household electricity consumption and electric vehicle charging sessions). We show that FedGAN converges and matches the performance of a general distributed GAN while reducing communication complexity. We also show its robustness to reduced communications.
|
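The intermediary's sync step, averaging and broadcasting generator and discriminator parameters, can be sketched as plain federated averaging. Parameter names and shapes below are hypothetical; real FedGAN would average full network weight tensors on a fixed schedule.

```python
import numpy as np

def fedgan_sync(local_params):
    """Intermediary step of FedGAN (sketch): average each parameter across
    agents and broadcast the mean back, so all local generators and
    discriminators restart from a common point."""
    avg = {name: np.mean([p[name] for p in local_params], axis=0)
           for name in local_params[0]}
    return [dict(avg) for _ in local_params]    # one broadcast copy per agent

# Two agents with hypothetical generator (G) and discriminator (D) parameters.
agents = [{"G": np.array([1.0, 2.0]), "D": np.array([0.0])},
          {"G": np.array([3.0, 4.0]), "D": np.array([2.0])}]
synced = fedgan_sync(agents)
print(synced[0]["G"])   # -> [2. 3.]
```

Between syncs, each agent trains its GAN only on its local data, which is where the communication savings come from.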
1410.6142
|
Mark Riedl
|
Mark O. Riedl
|
The Lovelace 2.0 Test of Artificial Creativity and Intelligence
|
2 pages
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Observing that the creation of certain types of artistic artifacts
necessitates intelligence, we present the Lovelace 2.0 Test of creativity as an
alternative to the Turing Test as a means of determining whether an agent is
intelligent. The Lovelace 2.0 Test builds off prior tests of creativity and
additionally provides a means of directly comparing the relative intelligence
of different agents.
|
[
{
"created": "Wed, 22 Oct 2014 18:59:31 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Oct 2014 15:09:53 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Dec 2014 03:24:06 GMT",
"version": "v3"
}
] |
2014-12-23
|
[
[
"Riedl",
"Mark O.",
""
]
] |
Observing that the creation of certain types of artistic artifacts necessitates intelligence, we present the Lovelace 2.0 Test of creativity as an alternative to the Turing Test as a means of determining whether an agent is intelligent. The Lovelace 2.0 Test builds off prior tests of creativity and additionally provides a means of directly comparing the relative intelligence of different agents.
|
2312.02445
|
Jiayi Liao
|
Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang
Wang, Xiangnan He
|
LLaRA: Large Language-Recommendation Assistant
|
11 pages, 5 figures
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequential recommendation aims to predict users' next interaction with items
based on their past engagement sequence. Recently, the advent of Large Language
Models (LLMs) has sparked interest in leveraging them for sequential
recommendation, viewing it as language modeling. Previous studies represent
items within LLMs' input prompts as either ID indices or textual metadata.
However, these approaches often fail to either encapsulate comprehensive world
knowledge or exhibit sufficient behavioral understanding. To combine the
complementary strengths of conventional recommenders in capturing behavioral
patterns of users and LLMs in encoding world knowledge about items, we
introduce Large Language-Recommendation Assistant (LLaRA). Specifically, it
uses a novel hybrid prompting method that integrates ID-based item embeddings
learned by traditional recommendation models with textual item features.
Treating the "sequential behaviors of users" as a distinct modality beyond
texts, we employ a projector to align the traditional recommender's ID
embeddings with the LLM's input space. Moreover, rather than directly exposing
the hybrid prompt to LLMs, a curriculum learning strategy is adopted to
gradually ramp up training complexity. Initially, we warm up the LLM using
text-only prompts, which better suit its inherent language modeling ability.
Subsequently, we progressively transition to the hybrid prompts, training the
model to seamlessly incorporate the behavioral knowledge from the traditional
sequential recommender into the LLM. Empirical results validate the
effectiveness of our proposed framework. Codes are available at
https://github.com/ljy0ustc/LLaRA.
|
[
{
"created": "Tue, 5 Dec 2023 02:53:46 GMT",
"version": "v1"
},
{
"created": "Sun, 31 Dec 2023 05:09:01 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Apr 2024 09:53:04 GMT",
"version": "v3"
},
{
"created": "Sat, 4 May 2024 10:44:33 GMT",
"version": "v4"
}
] |
2024-05-07
|
[
[
"Liao",
"Jiayi",
""
],
[
"Li",
"Sihang",
""
],
[
"Yang",
"Zhengyi",
""
],
[
"Wu",
"Jiancan",
""
],
[
"Yuan",
"Yancheng",
""
],
[
"Wang",
"Xiang",
""
],
[
"He",
"Xiangnan",
""
]
] |
Sequential recommendation aims to predict users' next interaction with items based on their past engagement sequence. Recently, the advent of Large Language Models (LLMs) has sparked interest in leveraging them for sequential recommendation, viewing it as language modeling. Previous studies represent items within LLMs' input prompts as either ID indices or textual metadata. However, these approaches often fail to either encapsulate comprehensive world knowledge or exhibit sufficient behavioral understanding. To combine the complementary strengths of conventional recommenders in capturing behavioral patterns of users and LLMs in encoding world knowledge about items, we introduce Large Language-Recommendation Assistant (LLaRA). Specifically, it uses a novel hybrid prompting method that integrates ID-based item embeddings learned by traditional recommendation models with textual item features. Treating the "sequential behaviors of users" as a distinct modality beyond texts, we employ a projector to align the traditional recommender's ID embeddings with the LLM's input space. Moreover, rather than directly exposing the hybrid prompt to LLMs, a curriculum learning strategy is adopted to gradually ramp up training complexity. Initially, we warm up the LLM using text-only prompts, which better suit its inherent language modeling ability. Subsequently, we progressively transition to the hybrid prompts, training the model to seamlessly incorporate the behavioral knowledge from the traditional sequential recommender into the LLM. Empirical results validate the effectiveness of our proposed framework. Codes are available at https://github.com/ljy0ustc/LLaRA.
|
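The projector that aligns recommender ID embeddings with the LLM input space can be sketched as a linear map followed by concatenation into the prompt's embedding sequence. The dimensions, the random projector, and the append-at-the-end ordering are assumptions for illustration; LLaRA trains the projector end-to-end and interleaves the projected "behavior tokens" with textual item features.

```python
import numpy as np

rng = np.random.default_rng(0)
d_rec, d_llm = 64, 128                      # hypothetical embedding sizes

# Trainable projector aligning recommender ID embeddings with the LLM input
# space; random here, but learned end-to-end in LLaRA.
W = rng.normal(size=(d_llm, d_rec))

def hybrid_prompt(text_token_embs, item_id_embs):
    """Sketch of hybrid prompting: project ID embeddings into the LLM space
    and append them to the embedded text prompt."""
    projected = item_id_embs @ W.T          # (n_items, d_llm)
    return np.concatenate([text_token_embs, projected], axis=0)

text = rng.normal(size=(5, d_llm))          # embedded text prompt tokens
items = rng.normal(size=(3, d_rec))         # ID embeddings from a recommender
seq = hybrid_prompt(text, items)
print(seq.shape)   # -> (8, 128)
```

The curriculum described in the abstract would first train on `text` alone and only later feed the full hybrid sequence.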
2203.16954
|
Wenlin Dai
|
Wenlin Dai, Changhe Song, Xiang Li, Zhiyong Wu, Huashan Pan, Xiulin
Li, Helen Meng
|
An End-to-end Chinese Text Normalization Model based on Rule-guided
Flat-Lattice Transformer
|
Accepted by ICASSP 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text normalization, defined as a procedure transforming non-standard words into
spoken-form words, is crucial to the intelligibility of synthesized speech in a
text-to-speech system. Rule-based methods that ignore context cannot eliminate
ambiguity, whereas sequence-to-sequence neural methods suffer from unexpected
and uninterpretable errors. A recently proposed hybrid system treats the
rule-based model and the neural model as two cascaded sub-modules, whose
limited interaction prevents the neural model from fully utilizing the expert
knowledge contained in the rules. Inspired by the Flat-LAttice Transformer
(FLAT), we propose an end-to-end Chinese text normalization model that accepts
Chinese characters as direct input and integrates expert knowledge contained in
rules into the neural network; both contribute to the superior performance of
the proposed model on the text normalization task. We also release the first
publicly accessible large-scale dataset for Chinese text normalization. Our
proposed model achieves excellent results on this dataset.
|
[
{
"created": "Thu, 31 Mar 2022 11:19:53 GMT",
"version": "v1"
}
] |
2022-04-01
|
[
[
"Dai",
"Wenlin",
""
],
[
"Song",
"Changhe",
""
],
[
"Li",
"Xiang",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Pan",
"Huashan",
""
],
[
"Li",
"Xiulin",
""
],
[
"Meng",
"Helen",
""
]
] |
Text normalization, defined as a procedure transforming non-standard words into spoken-form words, is crucial to the intelligibility of synthesized speech in a text-to-speech system. Rule-based methods that ignore context cannot eliminate ambiguity, whereas sequence-to-sequence neural methods suffer from unexpected and uninterpretable errors. A recently proposed hybrid system treats the rule-based model and the neural model as two cascaded sub-modules, whose limited interaction prevents the neural model from fully utilizing the expert knowledge contained in the rules. Inspired by the Flat-LAttice Transformer (FLAT), we propose an end-to-end Chinese text normalization model that accepts Chinese characters as direct input and integrates expert knowledge contained in rules into the neural network; both contribute to the superior performance of the proposed model on the text normalization task. We also release the first publicly accessible large-scale dataset for Chinese text normalization. Our proposed model achieves excellent results on this dataset.
|
2307.05827
|
Rohan Saha
|
Arif Shahriar, Rohan Saha, Denilson Barbosa
|
Relational Extraction on Wikipedia Tables using Convolutional and Memory
Networks
| null | null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Relation extraction (RE) is the task of extracting relations between entities
in text. Most RE methods extract relations from free-form running text and
leave out other rich data sources, such as tables. We explore RE from the
perspective of applying neural methods on tabularly organized data. We
introduce a new model consisting of Convolutional Neural Network (CNN) and
Bidirectional-Long Short Term Memory (BiLSTM) network to encode entities and
learn dependencies among them, respectively. We evaluate our model on a large
and recent dataset and compare results with previous neural methods.
Experimental results show that our model consistently outperforms the previous
model for the task of relation extraction on tabular data. We perform
comprehensive error analyses and an ablation study to show the contribution of
various components of our model. Finally, we discuss the usefulness and
trade-offs of our approach, and provide suggestions for fostering further
research.
|
[
{
"created": "Tue, 11 Jul 2023 22:36:47 GMT",
"version": "v1"
}
] |
2023-07-13
|
[
[
"Shahriar",
"Arif",
""
],
[
"Saha",
"Rohan",
""
],
[
"Barbosa",
"Denilson",
""
]
] |
Relation extraction (RE) is the task of extracting relations between entities in text. Most RE methods extract relations from free-form running text and leave out other rich data sources, such as tables. We explore RE from the perspective of applying neural methods on tabularly organized data. We introduce a new model consisting of Convolutional Neural Network (CNN) and Bidirectional-Long Short Term Memory (BiLSTM) network to encode entities and learn dependencies among them, respectively. We evaluate our model on a large and recent dataset and compare results with previous neural methods. Experimental results show that our model consistently outperforms the previous model for the task of relation extraction on tabular data. We perform comprehensive error analyses and ablation study to show the contribution of various components of our model. Finally, we discuss the usefulness and trade-offs of our approach, and provide suggestions for fostering further research.
|
2403.16265
|
Zhuoyi Peng
|
Zhuoyi Peng, Yi Yang
|
Connecting the Dots: Inferring Patent Phrase Similarity with Retrieved
Phrase Graphs
|
Findings of NAACL 2024
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the patent phrase similarity inference task, which measures the
semantic similarity between two patent phrases. As patent documents employ
legal and highly technical language, existing semantic textual similarity
methods that use localized contextual information do not perform satisfactorily
in inferring patent phrase similarity. To address this, we introduce a
graph-augmented approach to amplify the global contextual information of the
patent phrases. For each patent phrase, we construct a phrase graph that links
to its focal patents and a list of patents that are either cited by or cite
these focal patents. The augmented phrase embedding is then derived from
combining its localized contextual embedding with its global embedding within
the phrase graph. We further propose a self-supervised learning objective that
capitalizes on the retrieved topology to refine both the contextualized
embedding and the graph parameters in an end-to-end manner. Experimental
results from a unique patent phrase similarity dataset demonstrate that our
approach significantly enhances the representation of patent phrases, resulting
in marked improvements in similarity inference in a self-supervised fashion.
Substantial improvements are also observed in the supervised setting,
underscoring the potential benefits of leveraging retrieved phrase graph
augmentation.
|
[
{
"created": "Sun, 24 Mar 2024 18:59:38 GMT",
"version": "v1"
}
] |
2024-03-26
|
[
[
"Peng",
"Zhuoyi",
""
],
[
"Yang",
"Yi",
""
]
] |
We study the patent phrase similarity inference task, which measures the semantic similarity between two patent phrases. As patent documents employ legal and highly technical language, existing semantic textual similarity methods that use localized contextual information do not perform satisfactorily in inferring patent phrase similarity. To address this, we introduce a graph-augmented approach to amplify the global contextual information of the patent phrases. For each patent phrase, we construct a phrase graph that links to its focal patents and a list of patents that are either cited by or cite these focal patents. The augmented phrase embedding is then derived from combining its localized contextual embedding with its global embedding within the phrase graph. We further propose a self-supervised learning objective that capitalizes on the retrieved topology to refine both the contextualized embedding and the graph parameters in an end-to-end manner. Experimental results from a unique patent phrase similarity dataset demonstrate that our approach significantly enhances the representation of patent phrases, resulting in marked improvements in similarity inference in a self-supervised fashion. Substantial improvements are also observed in the supervised setting, underscoring the potential benefits of leveraging retrieved phrase graph augmentation.
|
2210.12373
|
Caesar Wu
|
Caesar Wu, Kotagiri Ramamohanarao, Rui Zhang, Pascal Bouvry
|
Strategic Decisions Survey, Taxonomy, and Future Directions from
Artificial Intelligence Perspective
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Strategic Decision-Making is always challenging because it is inherently
uncertain, ambiguous, risky, and complex. It is the art of possibility. We
develop a systematic taxonomy of decision-making frames that consists of 6
bases, 18 categories, and 54 frames. We aim to lay out the computational
foundation that makes it possible to capture a comprehensive landscape view of a
strategic problem. Compared with traditional models, it covers irrational,
non-rational and rational frames dealing with certainty, uncertainty,
complexity, ambiguity, chaos, and ignorance.
|
[
{
"created": "Sat, 22 Oct 2022 07:01:10 GMT",
"version": "v1"
}
] |
2022-10-25
|
[
[
"Wu",
"Caesar",
""
],
[
"Ramamohanarao",
"Kotagiri",
""
],
[
"Zhang",
"Rui",
""
],
[
"Bouvry",
"Pascal",
""
]
] |
Strategic Decision-Making is always challenging because it is inherently uncertain, ambiguous, risky, and complex. It is the art of possibility. We develop a systematic taxonomy of decision-making frames that consists of 6 bases, 18 categories, and 54 frames. We aim to lay out the computational foundation that makes it possible to capture a comprehensive landscape view of a strategic problem. Compared with traditional models, it covers irrational, non-rational and rational frames dealing with certainty, uncertainty, complexity, ambiguity, chaos, and ignorance.
|
2103.11642
|
Matthew R Behrend
|
Matthew R. Behrend and Sean M. Robinson
|
A Batch Normalization Classifier for Domain Adaptation
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Adapting a model to perform well on unforeseen data outside its training set
is a common problem that continues to motivate new approaches. We demonstrate
that application of batch normalization in the output layer, prior to softmax
activation, results in improved generalization across visual data domains in a
refined ResNet model. The approach adds negligible computational complexity yet
outperforms many domain adaptation methods that explicitly learn to align data
domains. We benchmark this technique on the Office-Home dataset and show that
batch normalization is competitive with other leading methods. We show that
this method is not sensitive to the presence of source data during adaptation
and further show that trained tensor distributions tend toward sparsity. Code
is available at https://github.com/matthewbehrend/BNC
|
[
{
"created": "Mon, 22 Mar 2021 08:03:44 GMT",
"version": "v1"
}
] |
2021-03-23
|
[
[
"Behrend",
"Matthew R.",
""
],
[
"Robinson",
"Sean M.",
""
]
] |
Adapting a model to perform well on unforeseen data outside its training set is a common problem that continues to motivate new approaches. We demonstrate that application of batch normalization in the output layer, prior to softmax activation, results in improved generalization across visual data domains in a refined ResNet model. The approach adds negligible computational complexity yet outperforms many domain adaptation methods that explicitly learn to align data domains. We benchmark this technique on the Office-Home dataset and show that batch normalization is competitive with other leading methods. We show that this method is not sensitive to the presence of source data during adaptation and further show that trained tensor distributions tend toward sparsity. Code is available at https://github.com/matthewbehrend/BNC
|
2301.04299
|
Maxwell Standen
|
Maxwell Standen, Junae Kim, Claudia Szabo
|
SoK: Adversarial Machine Learning Attacks and Defences in Multi-Agent
Reinforcement Learning
| null | null | null | null |
cs.LG cs.AI cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multi-Agent Reinforcement Learning (MARL) is vulnerable to Adversarial
Machine Learning (AML) attacks and needs adequate defences before it can be
used in real world applications. We have conducted a survey into the use of
execution-time AML attacks against MARL and the defences against those attacks.
We surveyed related work in the application of AML in Deep Reinforcement
Learning (DRL) and Multi-Agent Learning (MAL) to inform our analysis of AML for
MARL. We propose a novel perspective to understand the manner of perpetrating
an AML attack, by defining Attack Vectors. We develop two new frameworks to
address a gap in current modelling frameworks, focusing on the means and tempo
of an AML attack against MARL, and identify knowledge gaps and future avenues
of research.
|
[
{
"created": "Wed, 11 Jan 2023 04:25:00 GMT",
"version": "v1"
}
] |
2023-01-12
|
[
[
"Standen",
"Maxwell",
""
],
[
"Kim",
"Junae",
""
],
[
"Szabo",
"Claudia",
""
]
] |
Multi-Agent Reinforcement Learning (MARL) is vulnerable to Adversarial Machine Learning (AML) attacks and needs adequate defences before it can be used in real world applications. We have conducted a survey into the use of execution-time AML attacks against MARL and the defences against those attacks. We surveyed related work in the application of AML in Deep Reinforcement Learning (DRL) and Multi-Agent Learning (MAL) to inform our analysis of AML for MARL. We propose a novel perspective to understand the manner of perpetrating an AML attack, by defining Attack Vectors. We develop two new frameworks to address a gap in current modelling frameworks, focusing on the means and tempo of an AML attack against MARL, and identify knowledge gaps and future avenues of research.
|
1909.12104
|
Blai Bonet
|
Blai Bonet and Hector Geffner
|
Action Selection for MDPs: Anytime AO* vs. UCT
|
Proceedings AAAI-12
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the presence of non-admissible heuristics, A* and other best-first
algorithms can be converted into anytime optimal algorithms over OR graphs, by
simply continuing the search after the first solution is found. The same trick,
however, does not work for best-first algorithms over AND/OR graphs, which must
be able to expand leaf nodes of the explicit graph that are not necessarily
part of the best partial solution. Anytime optimal variants of AO* must thus
address an exploration-exploitation tradeoff: they cannot just "exploit", they
must keep exploring as well. In this work, we develop one such variant of AO*
and apply it to finite-horizon MDPs. This Anytime AO* algorithm eventually
delivers an optimal policy while using non-admissible random heuristics that
can be sampled, as when the heuristic is the cost of a base policy that can be
sampled with rollouts. We then test Anytime AO* for action selection over large
infinite-horizon MDPs that cannot be solved with existing off-line heuristic
search and dynamic programming algorithms, and compare it with UCT.
|
[
{
"created": "Thu, 26 Sep 2019 13:51:26 GMT",
"version": "v1"
}
] |
2019-09-27
|
[
[
"Bonet",
"Blai",
""
],
[
"Geffner",
"Hector",
""
]
] |
In the presence of non-admissible heuristics, A* and other best-first algorithms can be converted into anytime optimal algorithms over OR graphs, by simply continuing the search after the first solution is found. The same trick, however, does not work for best-first algorithms over AND/OR graphs, which must be able to expand leaf nodes of the explicit graph that are not necessarily part of the best partial solution. Anytime optimal variants of AO* must thus address an exploration-exploitation tradeoff: they cannot just "exploit", they must keep exploring as well. In this work, we develop one such variant of AO* and apply it to finite-horizon MDPs. This Anytime AO* algorithm eventually delivers an optimal policy while using non-admissible random heuristics that can be sampled, as when the heuristic is the cost of a base policy that can be sampled with rollouts. We then test Anytime AO* for action selection over large infinite-horizon MDPs that cannot be solved with existing off-line heuristic search and dynamic programming algorithms, and compare it with UCT.
|
2402.12130
|
Piotr Dudek
|
Piotr Dudek
|
Factor Machine: Mixed-signal Architecture for Fine-Grained Graph-Based
Computing
|
An essay in contribution to the Festschrift for Professor Steve
Furber, Manchester, 12 January 2024
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper proposes the design and implementation strategy of a novel
computing architecture, the Factor Machine. The work is a step towards a
general-purpose parallel system operating in a non-sequential manner,
exploiting processing/memory co-integration and replacing the traditional
Turing/von Neumann model of a computer system with a framework based on
"factorised computation". This architecture is inspired by neural information
processing principles and aims to progress the development of brain-like
machine intelligence systems, through providing a computing substrate designed
from the ground up to enable efficient implementations of algorithms based on
relational networks. The paper provides a rationale for such a machine, in the
context of the history of computing, and more recent developments in
neuromorphic hardware, reviews its general features, and proposes a
mixed-signal hardware implementation, based on using analogue circuits to carry
out computation and localised and sparse communication between the compute
units.
|
[
{
"created": "Mon, 19 Feb 2024 13:26:42 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2024 03:53:35 GMT",
"version": "v2"
}
] |
2024-02-21
|
[
[
"Dudek",
"Piotr",
""
]
] |
This paper proposes the design and implementation strategy of a novel computing architecture, the Factor Machine. The work is a step towards a general-purpose parallel system operating in a non-sequential manner, exploiting processing/memory co-integration and replacing the traditional Turing/von Neumann model of a computer system with a framework based on "factorised computation". This architecture is inspired by neural information processing principles and aims to progress the development of brain-like machine intelligence systems, through providing a computing substrate designed from the ground up to enable efficient implementations of algorithms based on relational networks. The paper provides a rationale for such a machine, in the context of the history of computing, and more recent developments in neuromorphic hardware, reviews its general features, and proposes a mixed-signal hardware implementation, based on using analogue circuits to carry out computation and localised and sparse communication between the compute units.
|
1604.04137
|
Lin Zhang
|
Lin Zhang, Menglong Ye, Petros Giataganas, Michael Hughes and
Guang-Zhong Yang
|
Autonomous Scanning for Endomicroscopic Mosaicing and 3D Fusion
|
In submission at International Conference on Robotics and
Automation(ICRA) 2017
| null |
10.1109/ICRA.2017.7989412
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic-assisted Minimally Invasive Surgery (RMIS) can benefit from the
automation of common, repetitive or well-defined but ergonomically difficult
tasks. One such task is the scanning of a pick-up endomicroscopy probe over a
complex, undulating tissue surface in order to enhance the effective
field-of-view through video mosaicing. In this paper, the da Vinci surgical
robot, through the dVRK framework, is used for autonomous scanning and 2D
mosaicing over a user-defined region of interest. To achieve the level of
precision required for high quality large-area mosaic generation, which relies
on sufficient overlap between consecutive image frames, visual servoing is
performed using a tracking marker attached to the probe. The resulting
sub-millimetre accuracy of the probe motion allows for the generation of large
endomicroscopy mosaics with minimal intervention from the surgeon. It also
allows the probe to be maintained in an orientation perpendicular to the local
tissue surface, providing optimal imaging results. Images are streamed from the
endomicroscope and overlaid live onto the surgeon's view, while 2D mosaics are
generated in real-time, and fused into a 3D stereo reconstruction of the
surgical scene, thus providing intuitive visualisation and fusion of the
multi-scale images. The system therefore offers significant potential to
enhance surgical procedures, by providing the operator with cellular-scale
information over a larger area than could typically be achieved by manual
scanning.
|
[
{
"created": "Thu, 14 Apr 2016 12:50:03 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2016 21:07:54 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Oct 2016 08:55:21 GMT",
"version": "v3"
}
] |
2018-03-05
|
[
[
"Zhang",
"Lin",
""
],
[
"Ye",
"Menglong",
""
],
[
"Giataganas",
"Petros",
""
],
[
"Hughes",
"Michael",
""
],
[
"Yang",
"Guang-Zhong",
""
]
] |
Robotic-assisted Minimally Invasive Surgery (RMIS) can benefit from the automation of common, repetitive or well-defined but ergonomically difficult tasks. One such task is the scanning of a pick-up endomicroscopy probe over a complex, undulating tissue surface in order to enhance the effective field-of-view through video mosaicing. In this paper, the da Vinci surgical robot, through the dVRK framework, is used for autonomous scanning and 2D mosaicing over a user-defined region of interest. To achieve the level of precision required for high quality large-area mosaic generation, which relies on sufficient overlap between consecutive image frames, visual servoing is performed using a tracking marker attached to the probe. The resulting sub-millimetre accuracy of the probe motion allows for the generation of large endomicroscopy mosaics with minimal intervention from the surgeon. It also allows the probe to be maintained in an orientation perpendicular to the local tissue surface, providing optimal imaging results. Images are streamed from the endomicroscope and overlaid live onto the surgeon's view, while 2D mosaics are generated in real-time, and fused into a 3D stereo reconstruction of the surgical scene, thus providing intuitive visualisation and fusion of the multi-scale images. The system therefore offers significant potential to enhance surgical procedures, by providing the operator with cellular-scale information over a larger area than could typically be achieved by manual scanning.
|
2111.06278
|
Vladimir Gurvich
|
Vladimir Gurvich
|
On Nash-solvability of finite $n$-person deterministic graphical games;
Catch 22
|
4 pages
| null | null | null |
cs.GT math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We consider finite $n$-person deterministic graphical (DG) games. These games
are modelled by finite directed graphs (digraphs) $G$ which may have directed
cycles and, hence, infinite plays. Yet, it is assumed that all these plays are
equivalent and form a single outcome $c$, while the terminal vertices $V_T =
\{a_1, \ldots, a_p\}$ form $p$ remaining outcomes. We study the existence of
Nash equilibria (NE) in pure stationary strategies. It is known that NE exist
when $n=2$ and may fail to exist when $n > 2$. Yet, the question becomes open
for $n > 2$ under the following extra condition: (C) For each of $n$ players,
$c$ is worse than each of $p$ terminal outcomes. In other words, all players
are interested in terminating the play, which is a natural assumption.
Moreover, Nash-solvability remains open even if we replace (C) by a weaker
condition: (C22) There exist no two players for whom $c$ is better than (at
least) two terminal outcomes. We conjecture that such two players exist in each
NE-free DG game, or in other words, that (C22) implies Nash-solvability, for
all $n$. Recently, the DG games were extended to a wider class of the DG
multi-stage (DGMS) games, whose outcomes are the strongly connected components
(SCC) of digraph $G$. Merging all outcomes of a DGMS game that correspond to
its non-terminal SCCs we obtain a DG game. Clearly, this operation respects
Nash-solvability (NS). Basic conditions and conjectures related to NS can be
extended from the DG to DGMS games: in both cases NE exist if $n=2$ and may
fail to exist when $n > 2$; furthermore, we modify conditions (C) and (C22) to
adapt them for the DGMS games. Keywords: $n$-person deterministic graphical
(multi-stage) games, Nash equilibrium, Nash-solvability, pure stationary
strategy, digraph, directed cycle, strongly connected component.
|
[
{
"created": "Thu, 11 Nov 2021 15:37:51 GMT",
"version": "v1"
}
] |
2021-11-12
|
[
[
"Gurvich",
"Vladimir",
""
]
] |
We consider finite $n$-person deterministic graphical (DG) games. These games are modelled by finite directed graphs (digraphs) $G$ which may have directed cycles and, hence, infinite plays. Yet, it is assumed that all these plays are equivalent and form a single outcome $c$, while the terminal vertices $V_T = \{a_1, \ldots, a_p\}$ form $p$ remaining outcomes. We study the existence of Nash equilibria (NE) in pure stationary strategies. It is known that NE exist when $n=2$ and may fail to exist when $n > 2$. Yet, the question becomes open for $n > 2$ under the following extra condition: (C) For each of $n$ players, $c$ is worse than each of $p$ terminal outcomes. In other words, all players are interested in terminating the play, which is a natural assumption. Moreover, Nash-solvability remains open even if we replace (C) by a weaker condition: (C22) There exist no two players for whom $c$ is better than (at least) two terminal outcomes. We conjecture that such two players exist in each NE-free DG game, or in other words, that (C22) implies Nash-solvability, for all $n$. Recently, the DG games were extended to a wider class of the DG multi-stage (DGMS) games, whose outcomes are the strongly connected components (SCC) of digraph $G$. Merging all outcomes of a DGMS game that correspond to its non-terminal SCCs we obtain a DG game. Clearly, this operation respects Nash-solvability (NS). Basic conditions and conjectures related to NS can be extended from the DG to DGMS games: in both cases NE exist if $n=2$ and may fail to exist when $n > 2$; furthermore, we modify conditions (C) and (C22) to adapt them for the DGMS games. Keywords: $n$-person deterministic graphical (multi-stage) games, Nash equilibrium, Nash-solvability, pure stationary strategy, digraph, directed cycle, strongly connected component.
|
0803.4241
|
Sebastien Verel
|
Maroun Bercachi (I3S), Philippe Collard (I3S), Manuel Clergue (I3S),
S\'ebastien Verel (I3S)
|
Evolving Dynamic Change and Exchange of Genotype Encoding in Genetic
Algorithms for Difficult Optimization Problems
| null |
Dans Proceedings of the IEEE Congress on Evolutionary Computation
CEC2007 - IEEE Congress on Evolutionary Computation CEC2007, singapore :
Singapour (2007)
| null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The application of genetic algorithms (GAs) to many optimization problems in
organizations often results in good performance and high quality solutions. For
successful and efficient use of GAs, it is not enough to simply apply simple
GAs (SGAs). In addition, it is necessary to find a proper representation for
the problem and to develop appropriate search operators that fit well to the
properties of the genotype encoding. The representation must at least be able
to encode all possible solutions of an optimization problem, and genetic
operators such as crossover and mutation should be applicable to it. In this
paper, serial alternation strategies between two codings are formulated in the
framework of dynamic change of genotype encoding in GAs for function
optimization. Likewise, a new variant of GAs for difficult optimization
problems denoted {\it Split-and-Merge} GA (SM-GA) is developed using a parallel
implementation of an SGA and evolving a dynamic exchange of individual
representation in the context of Dual Coding concept. Numerical experiments
show that the evolved SM-GA significantly outperforms an SGA with static single
coding.
|
[
{
"created": "Sat, 29 Mar 2008 07:51:18 GMT",
"version": "v1"
}
] |
2008-12-18
|
[
[
"Bercachi",
"Maroun",
"",
"I3S"
],
[
"Collard",
"Philippe",
"",
"I3S"
],
[
"Clergue",
"Manuel",
"",
"I3S"
],
[
"Verel",
"Sébastien",
"",
"I3S"
]
] |
The application of genetic algorithms (GAs) to many optimization problems in organizations often results in good performance and high quality solutions. For successful and efficient use of GAs, it is not enough to simply apply simple GAs (SGAs). In addition, it is necessary to find a proper representation for the problem and to develop appropriate search operators that fit well to the properties of the genotype encoding. The representation must at least be able to encode all possible solutions of an optimization problem, and genetic operators such as crossover and mutation should be applicable to it. In this paper, serial alternation strategies between two codings are formulated in the framework of dynamic change of genotype encoding in GAs for function optimization. Likewise, a new variant of GAs for difficult optimization problems denoted {\it Split-and-Merge} GA (SM-GA) is developed using a parallel implementation of an SGA and evolving a dynamic exchange of individual representation in the context of Dual Coding concept. Numerical experiments show that the evolved SM-GA significantly outperforms an SGA with static single coding.
|
2107.04711
|
Jan Smeddinck
|
Rosanna Bellini, Alexander Wilson, Jan David Smeddinck
|
Fragments of the Past: Curating Peer Support with Perpetrators of
Domestic Violence
| null | null |
10.1145/3411764.3445611
| null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is growing evidence that digital peer-support networks can have a
positive influence on behaviour change and wellbeing outcomes for people who
harm themselves and others. However, making and sustaining such networks are
subject to ethical and pragmatic challenges, particularly for perpetrators of
domestic violence who pose unique risks when brought together. In this work we
report on a ten-month study where we worked with six support workers and
eighteen perpetrators in the design and deployment of Fragments of the Past; a
socio-material system that connects audio messages with tangible artefacts. We
share how crafting digitally-augmented artefacts - 'fragments' - of experiences
of desisting from violence can translate messages for motivation and rapport
between peers, without subjecting the process to risks inherent with direct
inter-personal communication. These insights provide the basis for practical
considerations for future network design with challenging populations.
|
[
{
"created": "Fri, 9 Jul 2021 22:57:43 GMT",
"version": "v1"
}
] |
2021-07-13
|
[
[
"Bellini",
"Rosanna",
""
],
[
"Wilson",
"Alexander",
""
],
[
"Smeddinck",
"Jan David",
""
]
] |
There is growing evidence that digital peer-support networks can have a positive influence on behaviour change and wellbeing outcomes for people who harm themselves and others. However, making and sustaining such networks are subject to ethical and pragmatic challenges, particularly for perpetrators of domestic violence who pose unique risks when brought together. In this work we report on a ten-month study where we worked with six support workers and eighteen perpetrators in the design and deployment of Fragments of the Past; a socio-material system that connects audio messages with tangible artefacts. We share how crafting digitally-augmented artefacts - 'fragments' - of experiences of desisting from violence can translate messages for motivation and rapport between peers, without subjecting the process to risks inherent with direct inter-personal communication. These insights provide the basis for practical considerations for future network design with challenging populations.
|
2111.00600
|
Nur Lan
|
Nur Lan, Michal Geyer, Emmanuel Chemla, Roni Katzir
|
Minimum Description Length Recurrent Neural Networks
|
15 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We train neural networks to optimize a Minimum Description Length score,
i.e., to balance between the complexity of the network and its accuracy at a
task. We show that networks optimizing this objective function master tasks
involving memory challenges and go beyond context-free languages. These
learners master languages such as $a^nb^n$, $a^nb^nc^n$, $a^nb^{2n}$,
$a^nb^mc^{n+m}$, and they perform addition. Moreover, they often do so with
100% accuracy. The networks are small, and their inner workings are
transparent. We thus provide formal proofs that their perfect accuracy holds
not only on a given test set, but for any input sequence. To our knowledge, no
other connectionist model has been shown to capture the underlying grammars for
these languages in full generality.
|
[
{
"created": "Sun, 31 Oct 2021 21:43:31 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Mar 2022 16:38:35 GMT",
"version": "v2"
},
{
"created": "Wed, 30 Mar 2022 13:09:34 GMT",
"version": "v3"
},
{
"created": "Thu, 31 Mar 2022 10:50:33 GMT",
"version": "v4"
}
] |
2022-04-01
|
[
[
"Lan",
"Nur",
""
],
[
"Geyer",
"Michal",
""
],
[
"Chemla",
"Emmanuel",
""
],
[
"Katzir",
"Roni",
""
]
] |
We train neural networks to optimize a Minimum Description Length score, i.e., to balance between the complexity of the network and its accuracy at a task. We show that networks optimizing this objective function master tasks involving memory challenges and go beyond context-free languages. These learners master languages such as $a^nb^n$, $a^nb^nc^n$, $a^nb^{2n}$, $a^nb^mc^{n+m}$, and they perform addition. Moreover, they often do so with 100% accuracy. The networks are small, and their inner workings are transparent. We thus provide formal proofs that their perfect accuracy holds not only on a given test set, but for any input sequence. To our knowledge, no other connectionist model has been shown to capture the underlying grammars for these languages in full generality.
|
2312.03289
|
Seugnju Cho
|
Seungju Cho, Hongsin Lee, Changick Kim
|
Class Incremental Learning for Adversarial Robustness
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial training integrates adversarial examples during model training to
enhance robustness. However, its application in fixed dataset settings differs
from real-world dynamics, where data accumulates incrementally. In this study,
we investigate Adversarially Robust Class Incremental Learning (ARCIL), a
method that combines adversarial robustness with incremental learning. We
observe that combining incremental learning with naive adversarial training
easily leads to a loss of robustness. We discover that this is attributed to
the disappearance of the flatness of the loss function, a characteristic of
adversarial training. To address this issue, we propose the Flatness Preserving
Distillation (FPD) loss that leverages the output difference between
adversarial and clean examples. Additionally, we introduce the Logit Adjustment
Distillation (LAD) loss, which adapts the model's knowledge to perform well on
new tasks. Experimental results demonstrate the superiority of our method over
approaches that apply adversarial training to existing incremental learning
methods, which provides a strong baseline for incremental learning on
adversarial robustness in the future. Our method achieves AutoAttack accuracy
that is 5.99\%p, 5.27\%p, and 3.90\%p higher on average than the baseline on
split CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively. The code will be
made available.
|
[
{
"created": "Wed, 6 Dec 2023 04:38:02 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Dec 2023 04:21:33 GMT",
"version": "v2"
}
] |
2023-12-08
|
[
[
"Cho",
"Seungju",
""
],
[
"Lee",
"Hongsin",
""
],
[
"Kim",
"Changick",
""
]
] |
Adversarial training integrates adversarial examples during model training to enhance robustness. However, its application in fixed dataset settings differs from real-world dynamics, where data accumulates incrementally. In this study, we investigate Adversarially Robust Class Incremental Learning (ARCIL), a method that combines adversarial robustness with incremental learning. We observe that combining incremental learning with naive adversarial training easily leads to a loss of robustness. We discover that this is attributed to the disappearance of the flatness of the loss function, a characteristic of adversarial training. To address this issue, we propose the Flatness Preserving Distillation (FPD) loss that leverages the output difference between adversarial and clean examples. Additionally, we introduce the Logit Adjustment Distillation (LAD) loss, which adapts the model's knowledge to perform well on new tasks. Experimental results demonstrate the superiority of our method over approaches that apply adversarial training to existing incremental learning methods, which provides a strong baseline for incremental learning on adversarial robustness in the future. Our method achieves AutoAttack accuracy that is 5.99\%p, 5.27\%p, and 3.90\%p higher on average than the baseline on split CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively. The code will be made available.
|
2407.16898
|
Rub\'en Ruiz-Torrubiano
|
Andreas Krystallidis and Rub\'en Ruiz-Torrubiano
|
Introducing Individuality into Students' High School Timetables
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In a perfect world, each high school student could pursue their interests
through a personalized timetable that supports their strengths, weaknesses, and
curiosities. While recent research has shown that school systems are evolving
to support those developments by strengthening modularity in their curricula,
there is often a hurdle that prevents the complete success of such a system:
the scheduling process is too complex. While there are many tools that assist
with scheduling timetables in an effective way, they usually arrange students
into groups and classes with similar interests instead of handling each student
individually. In this paper, we propose an extension of the popular XHSTT
framework that adds two new constraints to model the individual student choices
as well as the requirements for group formation that arise from them. Those two
constraints were identified through extensive interviews with school
administrators and other school timetabling experts from six European
countries. We propose a corresponding ILP formulation and show first
optimization results for real-world instances from schools in Germany.
|
[
{
"created": "Wed, 19 Jun 2024 13:02:44 GMT",
"version": "v1"
}
] |
2024-07-25
|
[
[
"Krystallidis",
"Andreas",
""
],
[
"Ruiz-Torrubiano",
"Rubén",
""
]
] |
In a perfect world, each high school student could pursue their interests through a personalized timetable that supports their strengths, weaknesses, and curiosities. While recent research has shown that school systems are evolving to support those developments by strengthening modularity in their curricula, there is often a hurdle that prevents the complete success of such a system: the scheduling process is too complex. While there are many tools that assist with scheduling timetables in an effective way, they usually arrange students into groups and classes with similar interests instead of handling each student individually. In this paper, we propose an extension of the popular XHSTT framework that adds two new constraints to model the individual student choices as well as the requirements for group formation that arise from them. Those two constraints were identified through extensive interviews with school administrators and other school timetabling experts from six European countries. We propose a corresponding ILP formulation and show first optimization results for real-world instances from schools in Germany.
|
1908.00524
|
Michelle Valente
|
Michelle Valente, Cyril Joly and Arnaud de La Fortelle
|
Deep Sensor Fusion for Real-Time Odometry Estimation
|
arXiv admin note: substantial text overlap with arXiv:1902.08536
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cameras and 2D laser scanners, in combination, are able to provide low-cost,
light-weight and accurate solutions, which make their fusion well-suited for
many robot navigation tasks. However, correct data fusion depends on precise
calibration of the rigid body transform between the sensors. In this paper we
present the first framework that makes use of Convolutional Neural Networks
(CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The
use of CNNs provides the tools to not only extract the features from the two
sensors, but also to fuse and match them without needing a calibration between
the sensors. We transform the odometry estimation into an ordinal
classification problem in order to find accurate rotation and translation
values between consecutive frames. Results on a real road dataset show that the
fusion network runs in real-time and is able to improve the odometry estimation
of a single sensor alone by learning how to fuse two different types of data
information.
|
[
{
"created": "Wed, 31 Jul 2019 15:29:15 GMT",
"version": "v1"
}
] |
2019-08-02
|
[
[
"Valente",
"Michelle",
""
],
[
"Joly",
"Cyril",
""
],
[
"de La Fortelle",
"Arnaud",
""
]
] |
Cameras and 2D laser scanners, in combination, are able to provide low-cost, light-weight and accurate solutions, which make their fusion well-suited for many robot navigation tasks. However, correct data fusion depends on precise calibration of the rigid body transform between the sensors. In this paper we present the first framework that makes use of Convolutional Neural Networks (CNNs) for odometry estimation fusing 2D laser scanners and mono-cameras. The use of CNNs provides the tools to not only extract the features from the two sensors, but also to fuse and match them without needing a calibration between the sensors. We transform the odometry estimation into an ordinal classification problem in order to find accurate rotation and translation values between consecutive frames. Results on a real road dataset show that the fusion network runs in real-time and is able to improve the odometry estimation of a single sensor alone by learning how to fuse two different types of data information.
|
1102.2789
|
Johannes Mittmann
|
Malte Beecken, Johannes Mittmann and Nitin Saxena
|
Algebraic Independence and Blackbox Identity Testing
|
32 pages, preliminary version
| null | null | null |
cs.CC math.AC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Algebraic independence is an advanced notion in commutative algebra that
generalizes independence of linear polynomials to higher degree. Polynomials
{f_1, ..., f_m} \subset \F[x_1, ..., x_n] are called algebraically independent
if there is no non-zero polynomial F such that F(f_1, ..., f_m) = 0. The
transcendence degree, trdeg{f_1, ..., f_m}, is the maximal number r of
algebraically independent polynomials in the set. In this paper we design
blackbox and efficient linear maps \phi that reduce the number of variables
from n to r but maintain trdeg{\phi(f_i)}_i = r, assuming f_i's sparse and
small r. We apply these fundamental maps to solve several cases of blackbox
identity testing:
(1) Given a polynomial-degree circuit C and sparse polynomials f_1, ..., f_m
with trdeg r, we can test blackbox D := C(f_1, ..., f_m) for zeroness in
poly(size(D))^r time.
(2) Define a spsp_\delta(k,s,n) circuit C to be of the form \sum_{i=1}^k
\prod_{j=1}^s f_{i,j}, where f_{i,j} are sparse n-variate polynomials of degree
at most \delta. For k = 2 we give a poly(sn\delta)^{\delta^2} time blackbox
identity test.
(3) For a general depth-4 circuit we define a notion of rank. Assuming there
is a rank bound R for minimal simple spsp_\delta(k,s,n) identities, we give a
poly(snR\delta)^{Rk\delta^2} time blackbox identity test for spsp_\delta(k,s,n)
circuits. This partially generalizes the state of the art of depth-3 to depth-4
circuits.
The notion of trdeg works best with large or zero characteristic, but we also
give versions of our results for arbitrary fields.
|
[
{
"created": "Mon, 14 Feb 2011 15:00:16 GMT",
"version": "v1"
}
] |
2011-02-15
|
[
[
"Beecken",
"Malte",
""
],
[
"Mittmann",
"Johannes",
""
],
[
"Saxena",
"Nitin",
""
]
] |
Algebraic independence is an advanced notion in commutative algebra that generalizes independence of linear polynomials to higher degree. Polynomials {f_1, ..., f_m} \subset \F[x_1, ..., x_n] are called algebraically independent if there is no non-zero polynomial F such that F(f_1, ..., f_m) = 0. The transcendence degree, trdeg{f_1, ..., f_m}, is the maximal number r of algebraically independent polynomials in the set. In this paper we design blackbox and efficient linear maps \phi that reduce the number of variables from n to r but maintain trdeg{\phi(f_i)}_i = r, assuming f_i's sparse and small r. We apply these fundamental maps to solve several cases of blackbox identity testing: (1) Given a polynomial-degree circuit C and sparse polynomials f_1, ..., f_m with trdeg r, we can test blackbox D := C(f_1, ..., f_m) for zeroness in poly(size(D))^r time. (2) Define a spsp_\delta(k,s,n) circuit C to be of the form \sum_{i=1}^k \prod_{j=1}^s f_{i,j}, where f_{i,j} are sparse n-variate polynomials of degree at most \delta. For k = 2 we give a poly(sn\delta)^{\delta^2} time blackbox identity test. (3) For a general depth-4 circuit we define a notion of rank. Assuming there is a rank bound R for minimal simple spsp_\delta(k,s,n) identities, we give a poly(snR\delta)^{Rk\delta^2} time blackbox identity test for spsp_\delta(k,s,n) circuits. This partially generalizes the state of the art of depth-3 to depth-4 circuits. The notion of trdeg works best with large or zero characteristic, but we also give versions of our results for arbitrary fields.
|
2211.11958
|
Piji Li
|
Xuan Sheng, Zhaoyang Han, Piji Li, Xiangmao Chang
|
A Survey on Backdoor Attack and Defense in Natural Language Processing
|
12 pages, QRS2022
| null | null | null |
cs.CL cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning is becoming increasingly popular in real-life applications,
especially in natural language processing (NLP). Users often choose training
outsourcing or adopt third-party data and models due to data and computation
resources being limited. In such a situation, training data and models are
exposed to the public. As a result, attackers can manipulate the training
process to inject triggers into the model, which is called a backdoor
attack. Backdoor attacks are quite stealthy and difficult to detect because
they have little adverse influence on the model's performance on clean
samples. To get a precise grasp and understanding of this problem, in this
paper, we conduct a comprehensive review of backdoor attacks and defenses in
the field of NLP. Besides, we summarize benchmark datasets and point out the
open issues to design credible systems to defend against backdoor attacks.
|
[
{
"created": "Tue, 22 Nov 2022 02:35:12 GMT",
"version": "v1"
}
] |
2022-11-23
|
[
[
"Sheng",
"Xuan",
""
],
[
"Han",
"Zhaoyang",
""
],
[
"Li",
"Piji",
""
],
[
"Chang",
"Xiangmao",
""
]
] |
Deep learning is becoming increasingly popular in real-life applications, especially in natural language processing (NLP). Users often choose training outsourcing or adopt third-party data and models due to data and computation resources being limited. In such a situation, training data and models are exposed to the public. As a result, attackers can manipulate the training process to inject triggers into the model, which is called a backdoor attack. Backdoor attacks are quite stealthy and difficult to detect because they have little adverse influence on the model's performance on clean samples. To get a precise grasp and understanding of this problem, in this paper, we conduct a comprehensive review of backdoor attacks and defenses in the field of NLP. Besides, we summarize benchmark datasets and point out the open issues to design credible systems to defend against backdoor attacks.
|
1511.07545
|
Hailin Shi
|
Hailin Shi and Xiangyu Zhu and Shengcai Liao and Zhen Lei and Yang
Yang and Stan Z. Li
|
Constrained Deep Metric Learning for Person Re-identification
|
11 pages, 16 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Person re-identification aims to re-identify the probe image from a given set
of images under different camera views. It is challenging due to large
variations of pose, illumination, occlusion and camera view. Since the
convolutional neural networks (CNN) have excellent capability of feature
extraction, certain deep learning methods have been recently applied in person
re-identification. However, in person re-identification, the deep networks
often suffer from the over-fitting problem. In this paper, we propose a novel
CNN-based method to learn a discriminative metric with good robustness to the
over-fitting problem in person re-identification. Firstly, a novel deep
architecture is built where the Mahalanobis metric is learned with a weight
constraint. This weight constraint is used to regularize the learning, so that
the learned metric has a better generalization ability. Secondly, we find that
the selection of intra-class sample pairs is crucial for learning but has
received little attention. To cope with the large intra-class variations in
pedestrian images, we propose a novel training strategy named moderate positive
mining to prevent the training process from over-fitting to the extreme samples
in intra-class pairs. Experiments show that our approach significantly
outperforms state-of-the-art methods on several benchmarks of person
re-identification.
|
[
{
"created": "Tue, 24 Nov 2015 02:46:35 GMT",
"version": "v1"
}
] |
2015-11-25
|
[
[
"Shi",
"Hailin",
""
],
[
"Zhu",
"Xiangyu",
""
],
[
"Liao",
"Shengcai",
""
],
[
"Lei",
"Zhen",
""
],
[
"Yang",
"Yang",
""
],
[
"Li",
"Stan Z.",
""
]
] |
Person re-identification aims to re-identify the probe image from a given set of images under different camera views. It is challenging due to large variations of pose, illumination, occlusion and camera view. Since the convolutional neural networks (CNN) have excellent capability of feature extraction, certain deep learning methods have been recently applied in person re-identification. However, in person re-identification, the deep networks often suffer from the over-fitting problem. In this paper, we propose a novel CNN-based method to learn a discriminative metric with good robustness to the over-fitting problem in person re-identification. Firstly, a novel deep architecture is built where the Mahalanobis metric is learned with a weight constraint. This weight constraint is used to regularize the learning, so that the learned metric has a better generalization ability. Secondly, we find that the selection of intra-class sample pairs is crucial for learning but has received little attention. To cope with the large intra-class variations in pedestrian images, we propose a novel training strategy named moderate positive mining to prevent the training process from over-fitting to the extreme samples in intra-class pairs. Experiments show that our approach significantly outperforms state-of-the-art methods on several benchmarks of person re-identification.
|
2305.19798
|
Francesco Tonin
|
Yingyi Chen, Qinghua Tao, Francesco Tonin, Johan A.K. Suykens
|
Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal
Representation
|
NeurIPS 2023. We provide a primal-dual representation for the
asymmetric self-attention in transformer that allows to avoid explicit
computation of the kernel matrix
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, a new line of works has emerged to understand and improve
self-attention in Transformers by treating it as a kernel machine. However,
existing works apply the methods for symmetric kernels to the asymmetric
self-attention, resulting in a nontrivial gap between the analytical
understanding and numerical implementation. In this paper, we provide a new
perspective to represent and optimize self-attention through asymmetric Kernel
Singular Value Decomposition (KSVD), which is also motivated by the low-rank
property of self-attention normally observed in deep layers. Through asymmetric
KSVD, $i$) a primal-dual representation of self-attention is formulated, where
the optimization objective is cast to maximize the projection variances in the
attention outputs; $ii$) a novel attention mechanism, i.e., Primal-Attention,
is proposed via the primal representation of KSVD, avoiding explicit
computation of the kernel matrix in the dual; $iii$) with KKT conditions, we
prove that the stationary solution to the KSVD optimization in Primal-Attention
yields a zero-value objective. In this manner, KSVD optimization can be
implemented by simply minimizing a regularization loss, so that low-rank
property is promoted without extra decomposition. Numerical experiments show
state-of-the-art performance of our Primal-Attention with improved efficiency.
Moreover, we demonstrate that the deployed KSVD optimization regularizes
Primal-Attention with a sharper singular value decay than that of the canonical
self-attention, further verifying the great potential of our method. To the
best of our knowledge, this is the first work that provides a primal-dual
representation for the asymmetric kernel in self-attention and successfully
applies it to modeling and optimization.
|
[
{
"created": "Wed, 31 May 2023 12:38:24 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Dec 2023 09:26:05 GMT",
"version": "v2"
}
] |
2023-12-06
|
[
[
"Chen",
"Yingyi",
""
],
[
"Tao",
"Qinghua",
""
],
[
"Tonin",
"Francesco",
""
],
[
"Suykens",
"Johan A. K.",
""
]
] |
Recently, a new line of works has emerged to understand and improve self-attention in Transformers by treating it as a kernel machine. However, existing works apply the methods for symmetric kernels to the asymmetric self-attention, resulting in a nontrivial gap between the analytical understanding and numerical implementation. In this paper, we provide a new perspective to represent and optimize self-attention through asymmetric Kernel Singular Value Decomposition (KSVD), which is also motivated by the low-rank property of self-attention normally observed in deep layers. Through asymmetric KSVD, $i$) a primal-dual representation of self-attention is formulated, where the optimization objective is cast to maximize the projection variances in the attention outputs; $ii$) a novel attention mechanism, i.e., Primal-Attention, is proposed via the primal representation of KSVD, avoiding explicit computation of the kernel matrix in the dual; $iii$) with KKT conditions, we prove that the stationary solution to the KSVD optimization in Primal-Attention yields a zero-value objective. In this manner, KSVD optimization can be implemented by simply minimizing a regularization loss, so that low-rank property is promoted without extra decomposition. Numerical experiments show state-of-the-art performance of our Primal-Attention with improved efficiency. Moreover, we demonstrate that the deployed KSVD optimization regularizes Primal-Attention with a sharper singular value decay than that of the canonical self-attention, further verifying the great potential of our method. To the best of our knowledge, this is the first work that provides a primal-dual representation for the asymmetric kernel in self-attention and successfully applies it to modeling and optimization.
|
2311.11652
|
Sha Wang
|
Sha Wang, Yuchen Li, Hanhua Xiao, Lambert Deng, Yanfei Dong
|
Web News Timeline Generation with Extended Task Prompting
|
4 pages
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The creation of news timelines is essential for a comprehensive and contextual
understanding of events as they unfold over time. This approach aids in
discerning patterns and trends that might be obscured when news is viewed in
isolation. By organizing news in a chronological sequence, it becomes easier to
track the development of stories, understand the interrelation of events, and
grasp the broader implications of news items. This is particularly helpful in
sectors like finance and insurance, where timely understanding of the event
development-ranging from extreme weather to political upheavals and health
crises-is indispensable for effective risk management. While traditional
natural language processing (NLP) techniques have had some success, they often
fail to capture the news with nuanced relevance that are readily apparent to
domain experts, hindering broader industry integration. The advance of Large
Language Models (LLMs) offers a renewed opportunity to tackle this challenge.
However, directly prompting LLMs for this task is often ineffective. Our study
investigates the application of an extended task prompting technique to assess
past news relevance. We demonstrate that enhancing conventional prompts with
additional tasks boosts their effectiveness on various news datasets, rendering
news timeline generation practical for professional use. This work has been
deployed as a publicly accessible browser extension which is adopted within our
network.
|
[
{
"created": "Mon, 20 Nov 2023 10:38:22 GMT",
"version": "v1"
}
] |
2023-11-21
|
[
[
"Wang",
"Sha",
""
],
[
"Li",
"Yuchen",
""
],
[
"Xiao",
"Hanhua",
""
],
[
"Deng",
"Lambert",
""
],
[
"Dong",
"Yanfei",
""
]
] |
The creation of news timelines is essential for a comprehensive and contextual understanding of events as they unfold over time. This approach aids in discerning patterns and trends that might be obscured when news is viewed in isolation. By organizing news in a chronological sequence, it becomes easier to track the development of stories, understand the interrelation of events, and grasp the broader implications of news items. This is particularly helpful in sectors like finance and insurance, where timely understanding of the event development-ranging from extreme weather to political upheavals and health crises-is indispensable for effective risk management. While traditional natural language processing (NLP) techniques have had some success, they often fail to capture the news with nuanced relevance that are readily apparent to domain experts, hindering broader industry integration. The advance of Large Language Models (LLMs) offers a renewed opportunity to tackle this challenge. However, directly prompting LLMs for this task is often ineffective. Our study investigates the application of an extended task prompting technique to assess past news relevance. We demonstrate that enhancing conventional prompts with additional tasks boosts their effectiveness on various news datasets, rendering news timeline generation practical for professional use. This work has been deployed as a publicly accessible browser extension which is adopted within our network.
|
2305.17733
|
Hao Yang
|
Hao Yang, Jinming Zhao, Gholamreza Haffari, Ehsan Shareghi
|
Investigating Pre-trained Audio Encoders in the Low-Resource Condition
|
INTERSPEECH 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained speech encoders have been central to pushing state-of-the-art
results across various speech understanding and generation tasks. Nonetheless,
the capabilities of these encoders in low-resource settings are yet to be
thoroughly explored. To address this, we conduct a comprehensive set of
experiments using a representative set of 3 state-of-the-art encoders
(Wav2vec2, WavLM, Whisper) in the low-resource setting across 7 speech
understanding and generation tasks. We provide various quantitative and
qualitative analyses on task performance, convergence speed, and
representational properties of the encoders. We observe a connection between
the pre-training protocols of these encoders and the way in which they capture
information in their internal layers. In particular, we observe the Whisper
encoder exhibits the greatest low-resource capabilities on content-driven tasks
in terms of performance and convergence speed.
|
[
{
"created": "Sun, 28 May 2023 14:15:19 GMT",
"version": "v1"
}
] |
2023-05-30
|
[
[
"Yang",
"Hao",
""
],
[
"Zhao",
"Jinming",
""
],
[
"Haffari",
"Gholamreza",
""
],
[
"Shareghi",
"Ehsan",
""
]
] |
Pre-trained speech encoders have been central to pushing state-of-the-art results across various speech understanding and generation tasks. Nonetheless, the capabilities of these encoders in low-resource settings are yet to be thoroughly explored. To address this, we conduct a comprehensive set of experiments using a representative set of 3 state-of-the-art encoders (Wav2vec2, WavLM, Whisper) in the low-resource setting across 7 speech understanding and generation tasks. We provide various quantitative and qualitative analyses on task performance, convergence speed, and representational properties of the encoders. We observe a connection between the pre-training protocols of these encoders and the way in which they capture information in their internal layers. In particular, we observe the Whisper encoder exhibits the greatest low-resource capabilities on content-driven tasks in terms of performance and convergence speed.
|
2006.08731
|
Johannes Vass
|
Johannes Vass, Marie-Louise Lackner, Nysret Musliu
|
Exact and Metaheuristic Approaches for the Production Leveling Problem
|
Instance set is published under
https://dbai.tuwien.ac.at/staff/jvass/production-leveling/
| null | null | null |
cs.AI cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce a new problem in the field of production planning
which we call the Production Leveling Problem. The task is to assign orders to
production periods such that the load in each period and on each production
resource is balanced, capacity limits are not exceeded and the orders'
priorities are taken into account. Production Leveling is an important
intermediate step between long-term planning and the final scheduling of orders
within a production period, as it is responsible for selecting good subsets of
orders to be scheduled within each period.
A formal model of the problem is proposed and NP-hardness is shown by
reduction from Bin Packing. As an exact method for solving moderately sized
instances we introduce a MIP formulation. For solving large problem instances,
metaheuristic local search is investigated. A greedy heuristic and two
neighborhood structures for local search are proposed, in order to apply them
using Variable Neighborhood Descent and Simulated Annealing. Regarding exact
techniques, the main question of research is, up to which size instances are
solvable within a fixed amount of time. For the metaheuristic approaches the
aim is to show that they produce near-optimal solutions for smaller instances,
but also scale well to very large instances.
A set of realistic problem instances from an industrial partner is
contributed to the literature, as well as random instance generators. The
experimental evaluation conveys that the proposed MIP model works well for
instances with up to 250 orders. Out of the investigated metaheuristic
approaches, Simulated Annealing achieves the best results. It is shown to
produce solutions with less than 3% average optimality gap on small instances
and to scale well up to thousands of orders and dozens of periods and products.
The presented metaheuristic methods are already being used in the industry.
|
[
{
"created": "Mon, 15 Jun 2020 20:04:59 GMT",
"version": "v1"
}
] |
2020-06-17
|
[
[
"Vass",
"Johannes",
""
],
[
"Lackner",
"Marie-Louise",
""
],
[
"Musliu",
"Nysret",
""
]
] |
In this paper we introduce a new problem in the field of production planning which we call the Production Leveling Problem. The task is to assign orders to production periods such that the load in each period and on each production resource is balanced, capacity limits are not exceeded and the orders' priorities are taken into account. Production Leveling is an important intermediate step between long-term planning and the final scheduling of orders within a production period, as it is responsible for selecting good subsets of orders to be scheduled within each period. A formal model of the problem is proposed and NP-hardness is shown by reduction from Bin Packing. As an exact method for solving moderately sized instances we introduce a MIP formulation. For solving large problem instances, metaheuristic local search is investigated. A greedy heuristic and two neighborhood structures for local search are proposed, in order to apply them using Variable Neighborhood Descent and Simulated Annealing. Regarding exact techniques, the main question of research is, up to which size instances are solvable within a fixed amount of time. For the metaheuristic approaches the aim is to show that they produce near-optimal solutions for smaller instances, but also scale well to very large instances. A set of realistic problem instances from an industrial partner is contributed to the literature, as well as random instance generators. The experimental evaluation conveys that the proposed MIP model works well for instances with up to 250 orders. Out of the investigated metaheuristic approaches, Simulated Annealing achieves the best results. It is shown to produce solutions with less than 3% average optimality gap on small instances and to scale well up to thousands of orders and dozens of periods and products. The presented metaheuristic methods are already being used in the industry.
|
1102.2915
|
Filippo Utro
|
Filippo Utro
|
Algorithms for Internal Validation Clustering Measures in the Post
Genomic Era
| null |
PhD Thesis, University of Palermo, Italy, 2011
| null | null |
cs.DS q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inferring cluster structure in microarray datasets is a fundamental task for
the -omic sciences. A fundamental question in Statistics, Data Analysis and
Classification, is the prediction of the number of clusters in a dataset,
usually established via internal validation measures. Despite the wealth of
internal measures available in the literature, new ones have been recently
proposed, some of them specifically for microarray data. In this dissertation,
a study of internal validation measures is given, paying particular attention
to the stability based ones. Indeed, this class of measures is particularly
prominent and promising in order to obtain a reliable estimate of the number of
clusters in a dataset. For those measures, a new general algorithmic paradigm
is proposed here that highlights the richness of measures in this class and
accounts for the ones already available in the literature. Moreover, some of
the most representative validation measures are also considered. Experiments on
12 benchmark datasets are performed in order to assess both the intrinsic
ability of a measure to predict the correct number of clusters in a dataset and
its merit relative to the other measures. The main result is a hierarchy of
internal validation measures in terms of precision and speed, highlighting some
of their merits and limitations not reported before in the literature. This
hierarchy shows that the faster the measure, the less accurate it is. In order
to reduce the time performance gap between the fastest and the most precise
measures, the technique of designing fast approximation algorithms is
systematically applied. The end result is a speed-up of many of the measures
studied here that brings the gap between the fastest and the most precise
within one order of magnitude in time, with no degradation in their prediction
power. Prior to this work, the time gap was at least two orders of magnitude.
|
[
{
"created": "Mon, 14 Feb 2011 22:13:47 GMT",
"version": "v1"
}
] |
2011-02-16
|
[
[
"Utro",
"Filippo",
""
]
] |
Inferring cluster structure in microarray datasets is a fundamental task for the -omic sciences. A fundamental question in Statistics, Data Analysis and Classification, is the prediction of the number of clusters in a dataset, usually established via internal validation measures. Despite the wealth of internal measures available in the literature, new ones have been recently proposed, some of them specifically for microarray data. In this dissertation, a study of internal validation measures is given, paying particular attention to the stability based ones. Indeed, this class of measures is particularly prominent and promising in order to obtain a reliable estimate of the number of clusters in a dataset. For those measures, a new general algorithmic paradigm is proposed here that highlights the richness of measures in this class and accounts for the ones already available in the literature. Moreover, some of the most representative validation measures are also considered. Experiments on 12 benchmark datasets are performed in order to assess both the intrinsic ability of a measure to predict the correct number of clusters in a dataset and its merit relative to the other measures. The main result is a hierarchy of internal validation measures in terms of precision and speed, highlighting some of their merits and limitations not reported before in the literature. This hierarchy shows that the faster the measure, the less accurate it is. In order to reduce the time performance gap between the fastest and the most precise measures, the technique of designing fast approximation algorithms is systematically applied. The end result is a speed-up of many of the measures studied here that brings the gap between the fastest and the most precise within one order of magnitude in time, with no degradation in their prediction power. Prior to this work, the time gap was at least two orders of magnitude.
|
2006.04058
|
Thoudam Doren Singh
|
Alok Singh, Thoudam Doren Singh and Sivaji Bandyopadhyay
|
NITS-VC System for VATEX Video Captioning Challenge 2020
|
Workshop on Language & Vision with applications to Video
Understanding (LVVU 2020) - In conjunction with CVPR 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video captioning is the process of summarising the content, events and actions
of a video into a short textual form, which can be helpful in many research
areas such as video-guided machine translation, video sentiment analysis and
providing aid to needy individuals. In this paper, a system description of the
framework used for the VATEX-2020 video captioning challenge is presented. We
employ an encoder-decoder based approach in which the visual features of the
video are encoded using 3D convolutional neural network (C3D) and in the
decoding phase two Long Short Term Memory (LSTM) recurrent networks are used in
which visual features and input captions are fused separately and final output
is generated by performing element-wise product between the output of both
LSTMs. Our model is able to achieve BLEU scores of 0.20 and 0.22 on public and
private test data sets respectively.
|
[
{
"created": "Sun, 7 Jun 2020 06:39:56 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Sep 2020 14:05:13 GMT",
"version": "v2"
}
] |
2020-09-28
|
[
[
"Singh",
"Alok",
""
],
[
"Singh",
"Thoudam Doren",
""
],
[
"Bandyopadhyay",
"Sivaji",
""
]
] |
Video captioning is the process of summarising the content, events and actions of a video into a short textual form, which can be helpful in many research areas such as video-guided machine translation, video sentiment analysis and providing aid to needy individuals. In this paper, a system description of the framework used for the VATEX-2020 video captioning challenge is presented. We employ an encoder-decoder based approach in which the visual features of the video are encoded using 3D convolutional neural network (C3D) and in the decoding phase two Long Short Term Memory (LSTM) recurrent networks are used in which visual features and input captions are fused separately and final output is generated by performing element-wise product between the output of both LSTMs. Our model is able to achieve BLEU scores of 0.20 and 0.22 on public and private test data sets respectively.
|
2110.01295
|
Ruben Kruiper
|
Ruben Kruiper, Ioannis Konstas, Alasdair Gray, Farhad Sadeghineko,
Richard Watson and Bimal Kumar
|
SPaR.txt, a cheap Shallow Parsing approach for Regulatory texts
|
To be published in the NLLP workshop at EMNLP 2021, 9 pages (15
including reference and appendices). For the ScotReg corpus, SPaR.txt dataset
and code see: http://github.com/rubenkruiper/SPaR.txt
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automated Compliance Checking (ACC) systems aim to semantically parse
building regulations to a set of rules. However, semantic parsing is known to
be hard and requires large amounts of training data. The complexity of creating
such training data has led to research that focuses on small sub-tasks, such as
shallow parsing or the extraction of a limited subset of rules. This study
introduces a shallow parsing task for which training data is relatively cheap
to create, with the aim of learning a lexicon for ACC. We annotate a small
domain-specific dataset of 200 sentences, SPaR.txt, and train a sequence tagger
that achieves 79,93 F1-score on the test set. We then show through manual
evaluation that the model identifies most (89,84%) defined terms in a set of
building regulation documents, and that both contiguous and discontiguous
Multi-Word Expressions (MWE) are discovered with reasonable accuracy (70,3%).
|
[
{
"created": "Mon, 4 Oct 2021 10:00:22 GMT",
"version": "v1"
}
] |
2021-10-05
|
[
[
"Kruiper",
"Ruben",
""
],
[
"Konstas",
"Ioannis",
""
],
[
"Gray",
"Alasdair",
""
],
[
"Sadeghineko",
"Farhad",
""
],
[
"Watson",
"Richard",
""
],
[
"Kumar",
"Bimal",
""
]
] |
Automated Compliance Checking (ACC) systems aim to semantically parse building regulations to a set of rules. However, semantic parsing is known to be hard and requires large amounts of training data. The complexity of creating such training data has led to research that focuses on small sub-tasks, such as shallow parsing or the extraction of a limited subset of rules. This study introduces a shallow parsing task for which training data is relatively cheap to create, with the aim of learning a lexicon for ACC. We annotate a small domain-specific dataset of 200 sentences, SPaR.txt, and train a sequence tagger that achieves 79,93 F1-score on the test set. We then show through manual evaluation that the model identifies most (89,84%) defined terms in a set of building regulation documents, and that both contiguous and discontiguous Multi-Word Expressions (MWE) are discovered with reasonable accuracy (70,3%).
|
1908.09970
|
Raef Bassily
|
Raef Bassily, Vitaly Feldman, Kunal Talwar, Abhradeep Thakurta
|
Private Stochastic Convex Optimization with Optimal Rates
| null | null | null | null |
cs.LG cs.CR cs.DS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study differentially private (DP) algorithms for stochastic convex
optimization (SCO). In this problem the goal is to approximately minimize the
population loss given i.i.d. samples from a distribution over convex and
Lipschitz loss functions. A long line of existing work on private convex
optimization focuses on the empirical loss and derives asymptotically tight
bounds on the excess empirical loss. However a significant gap exists in the
known bounds for the population loss. We show that, up to logarithmic factors,
the optimal excess population loss for DP algorithms is equal to the larger of
the optimal non-private excess population loss, and the optimal excess
empirical loss of DP algorithms. This implies that, contrary to intuition based
on private ERM, private SCO has asymptotically the same rate of $1/\sqrt{n}$ as
non-private SCO in the parameter regime most common in practice. The best
previous result in this setting gives a rate of $1/n^{1/4}$. Our approach builds
on existing differentially private algorithms and relies on the analysis of
algorithmic stability to ensure generalization.
|
[
{
"created": "Tue, 27 Aug 2019 00:50:27 GMT",
"version": "v1"
}
] |
2019-08-28
|
[
[
"Bassily",
"Raef",
""
],
[
"Feldman",
"Vitaly",
""
],
[
"Talwar",
"Kunal",
""
],
[
"Thakurta",
"Abhradeep",
""
]
] |
We study differentially private (DP) algorithms for stochastic convex optimization (SCO). In this problem the goal is to approximately minimize the population loss given i.i.d. samples from a distribution over convex and Lipschitz loss functions. A long line of existing work on private convex optimization focuses on the empirical loss and derives asymptotically tight bounds on the excess empirical loss. However a significant gap exists in the known bounds for the population loss. We show that, up to logarithmic factors, the optimal excess population loss for DP algorithms is equal to the larger of the optimal non-private excess population loss, and the optimal excess empirical loss of DP algorithms. This implies that, contrary to intuition based on private ERM, private SCO has asymptotically the same rate of $1/\sqrt{n}$ as non-private SCO in the parameter regime most common in practice. The best previous result in this setting gives a rate of $1/n^{1/4}$. Our approach builds on existing differentially private algorithms and relies on the analysis of algorithmic stability to ensure generalization.
|
2404.12335
|
Nick Feng
|
Nick Feng, Lina Marsso, S. Getir Yaman, Isobel Standen, Yesugen
Baatartogtokh, Reem Ayad, Vict\'oria Oldemburgo de Mello, Bev Townsend, Hanne
Bartels, Ana Cavalcanti, Radu Calinescu, Marsha Chechik
|
Normative Requirements Operationalization with Large Language Models
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Normative non-functional requirements specify constraints that a system must
observe in order to avoid violations of social, legal, ethical, empathetic, and
cultural norms. As these requirements are typically defined by non-technical
system stakeholders with different expertise and priorities (ethicists,
lawyers, social scientists, etc.), ensuring their well-formedness and
consistency is very challenging. Recent research has tackled this challenge
using a domain-specific language to specify normative requirements as rules
whose consistency can then be analysed with formal methods. In this paper, we
propose a complementary approach that uses Large Language Models to extract
semantic relationships between abstract representations of system capabilities.
These relations, which are often assumed implicitly by non-technical
stakeholders (e.g., based on common sense or domain knowledge), are then used
to enrich the automated reasoning techniques for eliciting and analyzing the
consistency of normative requirements. We show the effectiveness of our
approach to normative requirements elicitation and operationalization through a
range of real-world case studies.
|
[
{
"created": "Thu, 18 Apr 2024 17:01:34 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2024 01:19:52 GMT",
"version": "v2"
}
] |
2024-05-30
|
[
[
"Feng",
"Nick",
""
],
[
"Marsso",
"Lina",
""
],
[
"Yaman",
"S. Getir",
""
],
[
"Standen",
"Isobel",
""
],
[
"Baatartogtokh",
"Yesugen",
""
],
[
"Ayad",
"Reem",
""
],
[
"de Mello",
"Victória Oldemburgo",
""
],
[
"Townsend",
"Bev",
""
],
[
"Bartels",
"Hanne",
""
],
[
"Cavalcanti",
"Ana",
""
],
[
"Calinescu",
"Radu",
""
],
[
"Chechik",
"Marsha",
""
]
] |
Normative non-functional requirements specify constraints that a system must observe in order to avoid violations of social, legal, ethical, empathetic, and cultural norms. As these requirements are typically defined by non-technical system stakeholders with different expertise and priorities (ethicists, lawyers, social scientists, etc.), ensuring their well-formedness and consistency is very challenging. Recent research has tackled this challenge using a domain-specific language to specify normative requirements as rules whose consistency can then be analysed with formal methods. In this paper, we propose a complementary approach that uses Large Language Models to extract semantic relationships between abstract representations of system capabilities. These relations, which are often assumed implicitly by non-technical stakeholders (e.g., based on common sense or domain knowledge), are then used to enrich the automated reasoning techniques for eliciting and analyzing the consistency of normative requirements. We show the effectiveness of our approach to normative requirements elicitation and operationalization through a range of real-world case studies.
|
2103.05174
|
Pavan Samtani
|
Pavan Samtani, Francisco Leiva, Javier Ruiz-del-Solar
|
Learning to Play Soccer From Scratch: Sample-Efficient Emergent
Coordination through Curriculum-Learning and Competition
| null | null | null | null |
cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work proposes a scheme that allows learning complex multi-agent
behaviors in a sample efficient manner, applied to 2v2 soccer. The problem is
formulated as a Markov game, and solved using deep reinforcement learning. We
propose a basic multi-agent extension of TD3 for learning the policy of each
player, in a decentralized manner. To ease learning, the task of 2v2 soccer is
divided in three stages: 1v0, 1v1 and 2v2. The process of learning in
multi-agent stages (1v1 and 2v2) uses agents trained on a previous stage as
fixed opponents. In addition, we propose using experience sharing, a method
that shares experience from a fixed opponent, trained in a previous stage, for
training the agent currently learning, and a form of frame-skipping, to raise
performance significantly. Our results show that high quality soccer play can
be obtained with our approach in just under 40M interactions. A summarized
video of the resulting game play can be found in https://youtu.be/f25l1j1U9RM.
|
[
{
"created": "Tue, 9 Mar 2021 01:57:16 GMT",
"version": "v1"
}
] |
2021-03-10
|
[
[
"Samtani",
"Pavan",
""
],
[
"Leiva",
"Francisco",
""
],
[
"Ruiz-del-Solar",
"Javier",
""
]
] |
This work proposes a scheme that allows learning complex multi-agent behaviors in a sample efficient manner, applied to 2v2 soccer. The problem is formulated as a Markov game, and solved using deep reinforcement learning. We propose a basic multi-agent extension of TD3 for learning the policy of each player, in a decentralized manner. To ease learning, the task of 2v2 soccer is divided in three stages: 1v0, 1v1 and 2v2. The process of learning in multi-agent stages (1v1 and 2v2) uses agents trained on a previous stage as fixed opponents. In addition, we propose using experience sharing, a method that shares experience from a fixed opponent, trained in a previous stage, for training the agent currently learning, and a form of frame-skipping, to raise performance significantly. Our results show that high quality soccer play can be obtained with our approach in just under 40M interactions. A summarized video of the resulting game play can be found in https://youtu.be/f25l1j1U9RM.
|
2003.06112
|
Linh Ngo
|
Ngo Van Linh, Tran Xuan Bach and Khoat Than
|
A Graph Convolutional Topic Model for Short and Noisy Text Streams
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning hidden topics from data streams has become absolutely necessary but
poses challenging problems such as concept drift as well as short and noisy
data. Using prior knowledge to enrich a topic model is one of the potential
solutions to cope with these challenges. Prior knowledge that is derived from
human knowledge (e.g. Wordnet) or a pre-trained model (e.g. Word2vec) is very
valuable and useful in helping topic models work better. However, in a streaming
environment where data arrives continually and infinitely, existing studies are
limited in exploiting these resources effectively. In particular, a knowledge
graph, which contains meaningful word relations, is ignored. In this paper,
aiming to exploit a knowledge graph effectively, we propose a novel graph
convolutional topic model (GCTM) which integrates graph convolutional networks
(GCN) into a topic model and a learning method which learns the networks and
the topic model simultaneously for data streams. In each minibatch, our method
not only can exploit an external knowledge graph but also can balance the
external and old knowledge to perform well on new data. We conduct extensive
experiments to evaluate our method with both a human knowledge graph (Wordnet)
and a graph built from pre-trained word embeddings (Word2vec). The experimental
results show that our method achieves significantly better performances than
state-of-the-art baselines in terms of probabilistic predictive measure and
topic coherence. In particular, our method can work well when dealing with
short texts as well as concept drift. The implementation of GCTM is available
at \url{https://github.com/bachtranxuan/GCTM.git}.
|
[
{
"created": "Fri, 13 Mar 2020 05:09:00 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Mar 2020 06:43:33 GMT",
"version": "v2"
},
{
"created": "Sat, 6 Feb 2021 03:51:01 GMT",
"version": "v3"
},
{
"created": "Fri, 24 Dec 2021 02:26:38 GMT",
"version": "v4"
}
] |
2021-12-28
|
[
[
"Van Linh",
"Ngo",
""
],
[
"Bach",
"Tran Xuan",
""
],
[
"Than",
"Khoat",
""
]
] |
Learning hidden topics from data streams has become absolutely necessary but poses challenging problems such as concept drift as well as short and noisy data. Using prior knowledge to enrich a topic model is one of the potential solutions to cope with these challenges. Prior knowledge that is derived from human knowledge (e.g. Wordnet) or a pre-trained model (e.g. Word2vec) is very valuable and useful in helping topic models work better. However, in a streaming environment where data arrives continually and infinitely, existing studies are limited in exploiting these resources effectively. In particular, a knowledge graph, which contains meaningful word relations, is ignored. In this paper, aiming to exploit a knowledge graph effectively, we propose a novel graph convolutional topic model (GCTM) which integrates graph convolutional networks (GCN) into a topic model and a learning method which learns the networks and the topic model simultaneously for data streams. In each minibatch, our method not only can exploit an external knowledge graph but also can balance the external and old knowledge to perform well on new data. We conduct extensive experiments to evaluate our method with both a human knowledge graph (Wordnet) and a graph built from pre-trained word embeddings (Word2vec). The experimental results show that our method achieves significantly better performances than state-of-the-art baselines in terms of probabilistic predictive measure and topic coherence. In particular, our method can work well when dealing with short texts as well as concept drift. The implementation of GCTM is available at \url{https://github.com/bachtranxuan/GCTM.git}.
|
2111.02964
|
Rohan Chandra
|
Rohan Chandra, Aniket Bera, Dinesh Manocha
|
Using Graph-Theoretic Machine Learning to Predict Human Driver Behavior
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Studies have shown that autonomous vehicles (AVs) behave conservatively in a
traffic environment composed of human drivers and do not adapt to local
conditions and socio-cultural norms. It is known that socially aware AVs can be
designed if there exists a mechanism to understand the behaviors of human
drivers. We present an approach that leverages machine learning to predict the
behaviors of human drivers. This is similar to how humans implicitly interpret
the behaviors of drivers on the road, by only observing the trajectories of
their vehicles. We use graph-theoretic tools to extract driver behavior
features from the trajectories and machine learning to obtain a computational
mapping between the extracted trajectory of a vehicle in traffic and the driver
behaviors. Compared to prior approaches in this domain, we prove that our
method is robust, general, and extendable to broad-ranging applications such as
autonomous navigation. We evaluate our approach on real-world traffic datasets
captured in the U.S., India, China, and Singapore, as well as in simulation.
|
[
{
"created": "Thu, 4 Nov 2021 15:57:10 GMT",
"version": "v1"
}
] |
2021-11-05
|
[
[
"Chandra",
"Rohan",
""
],
[
"Bera",
"Aniket",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
Studies have shown that autonomous vehicles (AVs) behave conservatively in a traffic environment composed of human drivers and do not adapt to local conditions and socio-cultural norms. It is known that socially aware AVs can be designed if there exists a mechanism to understand the behaviors of human drivers. We present an approach that leverages machine learning to predict the behaviors of human drivers. This is similar to how humans implicitly interpret the behaviors of drivers on the road, by only observing the trajectories of their vehicles. We use graph-theoretic tools to extract driver behavior features from the trajectories and machine learning to obtain a computational mapping between the extracted trajectory of a vehicle in traffic and the driver behaviors. Compared to prior approaches in this domain, we prove that our method is robust, general, and extendable to broad-ranging applications such as autonomous navigation. We evaluate our approach on real-world traffic datasets captured in the U.S., India, China, and Singapore, as well as in simulation.
|
2403.01487
|
Haogeng Liu
|
Haogeng Liu, Quanzeng You, Xiaotian Han, Yiqi Wang, Bohan Zhai,
Yongfei Liu, Yunzhe Tao, Huaibo Huang, Ran He, Hongxia Yang
|
InfiMM-HD: A Leap Forward in High-Resolution Multimodal Understanding
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal Large Language Models (MLLMs) have experienced significant
advancements recently. Nevertheless, challenges persist in the accurate
recognition and comprehension of intricate details within high-resolution
images. Despite being indispensable for the development of robust MLLMs, this
area remains underinvestigated. To tackle this challenge, our work introduces
InfiMM-HD, a novel architecture specifically designed for processing images of
different resolutions with low computational overhead. This innovation
facilitates the enlargement of MLLMs to higher-resolution capabilities.
InfiMM-HD incorporates a cross-attention module and visual windows to reduce
computation costs. By integrating this architectural design with a four-stage
training pipeline, our model attains improved visual perception efficiently and
cost-effectively. Empirical study underscores the robustness and effectiveness
of InfiMM-HD, opening new avenues for exploration in related areas. Codes and
models can be found at https://huggingface.co/Infi-MM/infimm-hd
|
[
{
"created": "Sun, 3 Mar 2024 11:39:41 GMT",
"version": "v1"
}
] |
2024-03-05
|
[
[
"Liu",
"Haogeng",
""
],
[
"You",
"Quanzeng",
""
],
[
"Han",
"Xiaotian",
""
],
[
"Wang",
"Yiqi",
""
],
[
"Zhai",
"Bohan",
""
],
[
"Liu",
"Yongfei",
""
],
[
"Tao",
"Yunzhe",
""
],
[
"Huang",
"Huaibo",
""
],
[
"He",
"Ran",
""
],
[
"Yang",
"Hongxia",
""
]
] |
Multimodal Large Language Models (MLLMs) have experienced significant advancements recently. Nevertheless, challenges persist in the accurate recognition and comprehension of intricate details within high-resolution images. Despite being indispensable for the development of robust MLLMs, this area remains underinvestigated. To tackle this challenge, our work introduces InfiMM-HD, a novel architecture specifically designed for processing images of different resolutions with low computational overhead. This innovation facilitates the enlargement of MLLMs to higher-resolution capabilities. InfiMM-HD incorporates a cross-attention module and visual windows to reduce computation costs. By integrating this architectural design with a four-stage training pipeline, our model attains improved visual perception efficiently and cost-effectively. Empirical study underscores the robustness and effectiveness of InfiMM-HD, opening new avenues for exploration in related areas. Codes and models can be found at https://huggingface.co/Infi-MM/infimm-hd
|
2406.00773
|
Jincheng Zhong
|
Jincheng Zhong, Xingzhuo Guo, Jiaxiang Dong, Mingsheng Long
|
Diffusion Tuning: Transferring Diffusion Models via Chain of Forgetting
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion models have significantly advanced the field of generative
modeling. However, training a diffusion model is computationally expensive,
creating a pressing need to adapt off-the-shelf diffusion models for downstream
generation tasks. Current fine-tuning methods focus on parameter-efficient
transfer learning but overlook the fundamental transfer characteristics of
diffusion models. In this paper, we investigate the transferability of
diffusion models and observe a monotonous chain of forgetting trend of
transferability along the reverse process. Based on this observation and novel
theoretical insights, we present Diff-Tuning, a frustratingly simple transfer
approach that leverages the chain of forgetting tendency. Diff-Tuning
encourages the fine-tuned model to retain the pre-trained knowledge at the end
of the denoising chain close to the generated data while discarding the other
noise side. We conduct comprehensive experiments to evaluate Diff-Tuning,
including the transfer of pre-trained Diffusion Transformer models to eight
downstream generations and the adaptation of Stable Diffusion to five control
conditions with ControlNet. Diff-Tuning achieves a 26% improvement over
standard fine-tuning and enhances the convergence speed of ControlNet by 24%.
Notably, parameter-efficient transfer learning techniques for diffusion models
can also benefit from Diff-Tuning.
|
[
{
"created": "Sun, 2 Jun 2024 15:20:59 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 10:08:22 GMT",
"version": "v2"
}
] |
2024-06-07
|
[
[
"Zhong",
"Jincheng",
""
],
[
"Guo",
"Xingzhuo",
""
],
[
"Dong",
"Jiaxiang",
""
],
[
"Long",
"Mingsheng",
""
]
] |
Diffusion models have significantly advanced the field of generative modeling. However, training a diffusion model is computationally expensive, creating a pressing need to adapt off-the-shelf diffusion models for downstream generation tasks. Current fine-tuning methods focus on parameter-efficient transfer learning but overlook the fundamental transfer characteristics of diffusion models. In this paper, we investigate the transferability of diffusion models and observe a monotonous chain of forgetting trend of transferability along the reverse process. Based on this observation and novel theoretical insights, we present Diff-Tuning, a frustratingly simple transfer approach that leverages the chain of forgetting tendency. Diff-Tuning encourages the fine-tuned model to retain the pre-trained knowledge at the end of the denoising chain close to the generated data while discarding the other noise side. We conduct comprehensive experiments to evaluate Diff-Tuning, including the transfer of pre-trained Diffusion Transformer models to eight downstream generations and the adaptation of Stable Diffusion to five control conditions with ControlNet. Diff-Tuning achieves a 26% improvement over standard fine-tuning and enhances the convergence speed of ControlNet by 24%. Notably, parameter-efficient transfer learning techniques for diffusion models can also benefit from Diff-Tuning.
|
1804.08902
|
Ran Ben Basat
|
Ran Ben Basat, Maayan Goldstein, Itai Segall
|
Learning Software Constraints via Installation Attempts
| null | null | null | null |
cs.SE cs.CR cs.DS cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern software systems are expected to be secure and contain all the latest
features, even when new versions of software are released multiple times an
hour. Each system may include many interacting packages. The problem of
installing multiple dependent packages has been extensively studied in the
past, yielding some promising solutions that work well in practice. However,
these assume that the developers declare all the dependencies and conflicts
between the packages. Oftentimes, the entire repository structure may not be
known upfront, for example when packages are developed by different vendors. In
this paper, we present algorithms for learning dependencies, conflicts and
defective packages from installation attempts. Our algorithms use combinatorial
data structures to generate queries that test installations and discover the
entire dependency structure. A query that the algorithms make corresponds to
trying to install a subset of packages and getting a Boolean feedback on
whether all constraints were satisfied in this subset. Our goal is to minimize
the query complexity of the algorithms. We prove lower and upper bounds on the
number of queries that these algorithms require to make for different settings
of the problem.
|
[
{
"created": "Tue, 24 Apr 2018 08:49:00 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Nov 2018 16:13:19 GMT",
"version": "v2"
}
] |
2018-11-15
|
[
[
"Basat",
"Ran Ben",
""
],
[
"Goldstein",
"Maayan",
""
],
[
"Segall",
"Itai",
""
]
] |
Modern software systems are expected to be secure and contain all the latest features, even when new versions of software are released multiple times an hour. Each system may include many interacting packages. The problem of installing multiple dependent packages has been extensively studied in the past, yielding some promising solutions that work well in practice. However, these assume that the developers declare all the dependencies and conflicts between the packages. Oftentimes, the entire repository structure may not be known upfront, for example when packages are developed by different vendors. In this paper, we present algorithms for learning dependencies, conflicts and defective packages from installation attempts. Our algorithms use combinatorial data structures to generate queries that test installations and discover the entire dependency structure. A query that the algorithms make corresponds to trying to install a subset of packages and getting a Boolean feedback on whether all constraints were satisfied in this subset. Our goal is to minimize the query complexity of the algorithms. We prove lower and upper bounds on the number of queries that these algorithms require to make for different settings of the problem.
|
1603.05800
|
Zhiyun Lu
|
Zhiyun Lu, Dong Guo, Alireza Bagheri Garakani, Kuan Liu, Avner May,
Aurelien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael
Picheny, Fei Sha
|
A Comparison between Deep Neural Nets and Kernel Acoustic Models for
Speech Recognition
|
arXiv admin note: text overlap with arXiv:1411.4000
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study large-scale kernel methods for acoustic modeling and compare to DNNs
on performance metrics related to both acoustic modeling and recognition.
Measuring perplexity and frame-level classification accuracy, kernel-based
acoustic models are as effective as their DNN counterparts. However, on
token-error-rates DNN models can be significantly better. We have discovered
that this might be attributed to DNN's unique strength in reducing both the
perplexity and the entropy of the predicted posterior probabilities. Motivated
by our findings, we propose a new technique, entropy regularized perplexity,
for model selection. This technique can noticeably improve the recognition
performance of both types of models, and reduces the gap between them. While
effective on Broadcast News, this technique could also be applicable to other
tasks.
|
[
{
"created": "Fri, 18 Mar 2016 09:16:01 GMT",
"version": "v1"
}
] |
2016-03-21
|
[
[
"Lu",
"Zhiyun",
""
],
[
"Guo",
"Dong",
""
],
[
"Garakani",
"Alireza Bagheri",
""
],
[
"Liu",
"Kuan",
""
],
[
"May",
"Avner",
""
],
[
"Bellet",
"Aurelien",
""
],
[
"Fan",
"Linxi",
""
],
[
"Collins",
"Michael",
""
],
[
"Kingsbury",
"Brian",
""
],
[
"Picheny",
"Michael",
""
],
[
"Sha",
"Fei",
""
]
] |
We study large-scale kernel methods for acoustic modeling and compare to DNNs on performance metrics related to both acoustic modeling and recognition. Measuring perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. However, on token-error-rates DNN models can be significantly better. We have discovered that this might be attributed to DNN's unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by our findings, we propose a new technique, entropy regularized perplexity, for model selection. This technique can noticeably improve the recognition performance of both types of models, and reduces the gap between them. While effective on Broadcast News, this technique could also be applicable to other tasks.
|
1209.5430
|
Spyros Sioutas SS
|
Spyros Sioutas, Alexandros Panaretos, Ioannis Karydis, Dimitrios
Tsoumakos, Giannis Tzimas and Dimitrios Tsolis
|
SART: Speeding up Query Processing in Sensor Networks with an Autonomous
Range Tree Structure
|
11 pages, 23 figures, 5 algorithms or operations
|
ACM Applied Computing Review (ACR), Vol. 12, No.3, 2012, pp.60-74
| null | null |
cs.DC cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of constructing efficient P2P overlays for sensornets
providing "Energy-Level Application and Services". The method presented in
\cite{SOPXM09} presents a novel P2P overlay for Energy Level discovery in a
sensornet. However, this solution is not dynamic, since requires periodical
restructuring. In particular, it is not able to support neither join of
sensor\_nodes with energy level out of the ranges supported by the existing p2p
overlay nor leave of \emph{empty} overlay\_peers to which no sensor\_nodes are
currently associated. On this purpose and based on the efficient P2P method
presented in \cite{SPSTMT10}, we design a dynamic P2P overlay for Energy Level
discovery in a sensornet, the so-called SART (Sensors' Autonomous Range Tree).
The adaptation of the P2P index presented in \cite{SPSTMT10} guarantees the
best-known dynamic query performance of the above operation. We experimentally
verify this performance, via the D-P2P-Sim simulator (D-P2P-Sim is publicly
available at http://code.google.com/p/d-p2p-sim/).
|
[
{
"created": "Mon, 24 Sep 2012 21:24:36 GMT",
"version": "v1"
}
] |
2012-09-26
|
[
[
"Sioutas",
"Spyros",
""
],
[
"Panaretos",
"Alexandros",
""
],
[
"Karydis",
"Ioannis",
""
],
[
"Tsoumakos",
"Dimitrios",
""
],
[
"Tzimas",
"Giannis",
""
],
[
"Tsolis",
"Dimitrios",
""
]
] |
We consider the problem of constructing efficient P2P overlays for sensornets providing "Energy-Level Application and Services". The method presented in \cite{SOPXM09} presents a novel P2P overlay for Energy Level discovery in a sensornet. However, this solution is not dynamic, since it requires periodic restructuring. In particular, it supports neither the join of sensor\_nodes with an energy level outside the ranges supported by the existing P2P overlay, nor the departure of \emph{empty} overlay\_peers to which no sensor\_nodes are currently associated. To this end, and based on the efficient P2P method presented in \cite{SPSTMT10}, we design a dynamic P2P overlay for Energy Level discovery in a sensornet, the so-called SART (Sensors' Autonomous Range Tree). The adaptation of the P2P index presented in \cite{SPSTMT10} guarantees the best-known dynamic query performance of the above operation. We experimentally verify this performance, via the D-P2P-Sim simulator (D-P2P-Sim is publicly available at http://code.google.com/p/d-p2p-sim/).
|
2306.03090
|
Rose Wang
|
Rose E. Wang, Dorottya Demszky
|
Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For
Scoring and Providing Actionable Insights on Classroom Instruction
|
In the Proceedings of Innovative Use of NLP for Building Educational
Applications 2023; The code and model outputs are open-sourced here:
https://github.com/rosewang2008/zero-shot-teacher-feedback
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Coaching, which involves classroom observation and expert feedback, is a
widespread and fundamental part of teacher training. However, the majority of
teachers do not have access to consistent, high-quality coaching due to limited
resources and access to expertise. We explore whether generative AI could
become a cost-effective complement to expert feedback by serving as an
automated teacher coach. In doing so, we propose three teacher coaching tasks
for generative AI: (A) scoring transcript segments based on classroom
observation instruments, (B) identifying highlights and missed opportunities
for good instructional strategies, and (C) providing actionable suggestions for
eliciting more student reasoning. We recruit expert math teachers to evaluate
the zero-shot performance of ChatGPT on each of these tasks for elementary math
classroom transcripts. Our results reveal that ChatGPT generates responses that
are relevant to improving instruction, but they are often not novel or
insightful. For example, 82% of the model's suggestions point to places in the
transcript where the teacher is already implementing that suggestion. Our work
highlights the challenges of producing insightful, novel and truthful feedback
for teachers while paving the way for future research to address these
obstacles and improve the capacity of generative AI to coach teachers.
|
[
{
"created": "Mon, 5 Jun 2023 17:59:21 GMT",
"version": "v1"
}
] |
2023-06-06
|
[
[
"Wang",
"Rose E.",
""
],
[
"Demszky",
"Dorottya",
""
]
] |
Coaching, which involves classroom observation and expert feedback, is a widespread and fundamental part of teacher training. However, the majority of teachers do not have access to consistent, high-quality coaching due to limited resources and access to expertise. We explore whether generative AI could become a cost-effective complement to expert feedback by serving as an automated teacher coach. In doing so, we propose three teacher coaching tasks for generative AI: (A) scoring transcript segments based on classroom observation instruments, (B) identifying highlights and missed opportunities for good instructional strategies, and (C) providing actionable suggestions for eliciting more student reasoning. We recruit expert math teachers to evaluate the zero-shot performance of ChatGPT on each of these tasks for elementary math classroom transcripts. Our results reveal that ChatGPT generates responses that are relevant to improving instruction, but they are often not novel or insightful. For example, 82% of the model's suggestions point to places in the transcript where the teacher is already implementing that suggestion. Our work highlights the challenges of producing insightful, novel and truthful feedback for teachers while paving the way for future research to address these obstacles and improve the capacity of generative AI to coach teachers.
|
1811.02657
|
Tan Nguyen
|
Tan Nguyen, Nhat Ho, Ankit Patel, Anima Anandkumar, Michael I. Jordan,
Richard G. Baraniuk
|
A Bayesian Perspective of Convolutional Neural Networks through a
Deconvolutional Generative Model
|
Keywords: neural nets, generative models, semi-supervised learning,
cross-entropy, statistical guarantees 80 pages, 7 figures, 8 tables
| null | null | null |
cs.CV cs.AI cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by the success of Convolutional Neural Networks (CNNs) for
supervised prediction in images, we design the Deconvolutional Generative Model
(DGM), a new probabilistic generative model whose inference calculations
correspond to those in a given CNN architecture. The DGM uses a CNN to design
the prior distribution in the probabilistic model. Furthermore, the DGM
generates images from coarse to finer scales. It introduces a small set of
latent variables at each scale, and enforces dependencies among all the latent
variables via a conjugate prior distribution. This conjugate prior yields a new
regularizer based on paths rendered in the generative model for training
CNNs: the Rendering Path Normalization (RPN). We demonstrate that this
regularizer improves generalization, both in theory and in practice. In
addition, likelihood estimation in the DGM yields training losses for CNNs, and
inspired by this, we design a new loss termed as the Max-Min cross entropy
which outperforms the traditional cross-entropy loss for object classification.
The Max-Min cross entropy suggests a new deep network architecture, namely the
Max-Min network, which can learn from less labeled data while maintaining good
prediction performance. Our experiments demonstrate that the DGM with the RPN
and the Max-Min architecture exceeds or matches the state of the art on benchmarks
including SVHN, CIFAR10, and CIFAR100 for semi-supervised and supervised
learning tasks.
|
[
{
"created": "Thu, 1 Nov 2018 01:27:37 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Dec 2019 10:21:21 GMT",
"version": "v2"
}
] |
2019-12-10
|
[
[
"Nguyen",
"Tan",
""
],
[
"Ho",
"Nhat",
""
],
[
"Patel",
"Ankit",
""
],
[
"Anandkumar",
"Anima",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] |
Inspired by the success of Convolutional Neural Networks (CNNs) for supervised prediction in images, we design the Deconvolutional Generative Model (DGM), a new probabilistic generative model whose inference calculations correspond to those in a given CNN architecture. The DGM uses a CNN to design the prior distribution in the probabilistic model. Furthermore, the DGM generates images from coarse to finer scales. It introduces a small set of latent variables at each scale, and enforces dependencies among all the latent variables via a conjugate prior distribution. This conjugate prior yields a new regularizer based on paths rendered in the generative model for training CNNs: the Rendering Path Normalization (RPN). We demonstrate that this regularizer improves generalization, both in theory and in practice. In addition, likelihood estimation in the DGM yields training losses for CNNs, and inspired by this, we design a new loss termed as the Max-Min cross entropy which outperforms the traditional cross-entropy loss for object classification. The Max-Min cross entropy suggests a new deep network architecture, namely the Max-Min network, which can learn from less labeled data while maintaining good prediction performance. Our experiments demonstrate that the DGM with the RPN and the Max-Min architecture exceeds or matches the state of the art on benchmarks including SVHN, CIFAR10, and CIFAR100 for semi-supervised and supervised learning tasks.
|
2102.11263
|
Kripasindhu Sarkar
|
Kripasindhu Sarkar and Vladislav Golyanik and Lingjie Liu and
Christian Theobalt
|
Style and Pose Control for Image Synthesis of Humans from a Single
Monocular View
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Photo-realistic re-rendering of a human from a single image with explicit
control over body pose, shape and appearance enables a wide range of
applications, such as human appearance transfer, virtual try-on, motion
imitation, and novel view synthesis. While significant progress has been made
in this direction using learning-based image generation tools, such as GANs,
existing approaches yield noticeable artefacts such as blurring of fine
details, unrealistic distortions of the body parts and garments as well as
severe changes of the textures. We, therefore, propose a new method for
synthesising photo-realistic human images with explicit control over pose and
part-based appearance, i.e., StylePoseGAN, where we extend a non-controllable
generator to accept conditioning of pose and appearance separately. Our network
can be trained in a fully supervised way with human images to disentangle pose,
appearance and body parts, and it significantly outperforms existing single
image re-rendering methods. Our disentangled representation opens up further
applications such as garment transfer, motion transfer, virtual try-on, head
(identity) swap and appearance interpolation. StylePoseGAN achieves
state-of-the-art image generation fidelity on common perceptual metrics
compared to the current best-performing methods and convinces in a
comprehensive user study.
|
[
{
"created": "Mon, 22 Feb 2021 18:50:47 GMT",
"version": "v1"
}
] |
2021-02-23
|
[
[
"Sarkar",
"Kripasindhu",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Liu",
"Lingjie",
""
],
[
"Theobalt",
"Christian",
""
]
] |
Photo-realistic re-rendering of a human from a single image with explicit control over body pose, shape and appearance enables a wide range of applications, such as human appearance transfer, virtual try-on, motion imitation, and novel view synthesis. While significant progress has been made in this direction using learning-based image generation tools, such as GANs, existing approaches yield noticeable artefacts such as blurring of fine details, unrealistic distortions of the body parts and garments as well as severe changes of the textures. We, therefore, propose a new method for synthesising photo-realistic human images with explicit control over pose and part-based appearance, i.e., StylePoseGAN, where we extend a non-controllable generator to accept conditioning of pose and appearance separately. Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts, and it significantly outperforms existing single image re-rendering methods. Our disentangled representation opens up further applications such as garment transfer, motion transfer, virtual try-on, head (identity) swap and appearance interpolation. StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics compared to the current best-performing methods and convinces in a comprehensive user study.
|
2210.17471
|
Kenneth Mayer
|
Kenneth MacSporran Mayer, Laura Cottatellucci, Robert Schober
|
Optimal Antenna Placement for Two-Antenna Near-Field Wireless Power
Transfer
|
7 pages, 3 figures, six page version of this paper has been submitted
to IEEE ICC 2023
| null |
10.1109/ICC45041.2023.10278773
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current trends in communication system design precipitate a change in the
operating regime from the traditional far-field to the radiating near-field
(Fresnel) region. We investigate the optimal transmit antenna placement for a
multiple-input single-output (MISO) wireless power transfer (WPT) system
designed for a three-dimensional cuboid room under line-of-sight (LoS)
conditions in the Fresnel region. We formulate an optimisation problem for
maximising the received power at the worst possible receiver location by
considering the spherical nature of the electromagnetic (EM) wavefronts in the
Fresnel region while assuming perfect knowledge of the channel at the
transmitter. For the case of two transmit antennas, we derive a closed-form
expression for the optimal positioning of the antennas which is purely
determined by the geometry of the environment. If the room contains locations
where the far-field approximation holds, the proposed positioning is shown to
reduce to the far-field solution. The analytical solution is validated through
simulation. Furthermore, the maximum received power at the locations yielding
the worst performance is quantified and the power gain over the optimal
far-field solution is presented. For the considered cuboid environment, we show
that a distributed antenna system is optimal in the Fresnel region, whereas a
co-located antenna architecture is ideal for the far-field.
|
[
{
"created": "Mon, 31 Oct 2022 16:56:33 GMT",
"version": "v1"
}
] |
2023-11-06
|
[
[
"Mayer",
"Kenneth MacSporran",
""
],
[
"Cottatellucci",
"Laura",
""
],
[
"Schober",
"Robert",
""
]
] |
Current trends in communication system design precipitate a change in the operating regime from the traditional far-field to the radiating near-field (Fresnel) region. We investigate the optimal transmit antenna placement for a multiple-input single-output (MISO) wireless power transfer (WPT) system designed for a three-dimensional cuboid room under line-of-sight (LoS) conditions in the Fresnel region. We formulate an optimisation problem for maximising the received power at the worst possible receiver location by considering the spherical nature of the electromagnetic (EM) wavefronts in the Fresnel region while assuming perfect knowledge of the channel at the transmitter. For the case of two transmit antennas, we derive a closed-form expression for the optimal positioning of the antennas which is purely determined by the geometry of the environment. If the room contains locations where the far-field approximation holds, the proposed positioning is shown to reduce to the far-field solution. The analytical solution is validated through simulation. Furthermore, the maximum received power at the locations yielding the worst performance is quantified and the power gain over the optimal far-field solution is presented. For the considered cuboid environment, we show that a distributed antenna system is optimal in the Fresnel region, whereas a co-located antenna architecture is ideal for the far-field.
|
2304.12512
|
Michael Sandborn
|
Henry Gilbert, Michael Sandborn, Douglas C. Schmidt, Jesse
Spencer-Smith, Jules White
|
Semantic Compression With Large Language Models
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise of large language models (LLMs) is revolutionizing information
retrieval, question answering, summarization, and code generation tasks.
However, in addition to confidently presenting factually inaccurate information
at times (known as "hallucinations"), LLMs are also inherently limited by the
number of input and output tokens that can be processed at once, making them
potentially less effective on tasks that require processing a large set or
continuous stream of information. A common approach to reducing the size of
data is through lossless or lossy compression. Yet, in some cases it may not be
strictly necessary to perfectly recover every detail from the original data, as
long as a requisite level of semantic precision or intent is conveyed.
This paper presents three contributions to research on LLMs. First, we
present the results from experiments exploring the viability of approximate
compression using LLMs, focusing specifically on GPT-3.5 and GPT-4 via ChatGPT
interfaces. Second, we investigate and quantify the capability of LLMs to
compress text and code, as well as to recall and manipulate compressed
representations of prompts. Third, we present two novel metrics -- Exact
Reconstructive Effectiveness (ERE) and Semantic Reconstruction Effectiveness
(SRE) -- that quantify the level of preserved intent between text compressed
and decompressed by the LLMs we studied. Our initial results indicate that
GPT-4 can effectively compress and reconstruct text while preserving the
semantic essence of the original text, providing a path to leverage
$\sim$5$\times$ more tokens than present limits allow.
|
[
{
"created": "Tue, 25 Apr 2023 01:47:05 GMT",
"version": "v1"
}
] |
2023-04-26
|
[
[
"Gilbert",
"Henry",
""
],
[
"Sandborn",
"Michael",
""
],
[
"Schmidt",
"Douglas C.",
""
],
[
"Spencer-Smith",
"Jesse",
""
],
[
"White",
"Jules",
""
]
] |
The rise of large language models (LLMs) is revolutionizing information retrieval, question answering, summarization, and code generation tasks. However, in addition to confidently presenting factually inaccurate information at times (known as "hallucinations"), LLMs are also inherently limited by the number of input and output tokens that can be processed at once, making them potentially less effective on tasks that require processing a large set or continuous stream of information. A common approach to reducing the size of data is through lossless or lossy compression. Yet, in some cases it may not be strictly necessary to perfectly recover every detail from the original data, as long as a requisite level of semantic precision or intent is conveyed. This paper presents three contributions to research on LLMs. First, we present the results from experiments exploring the viability of approximate compression using LLMs, focusing specifically on GPT-3.5 and GPT-4 via ChatGPT interfaces. Second, we investigate and quantify the capability of LLMs to compress text and code, as well as to recall and manipulate compressed representations of prompts. Third, we present two novel metrics -- Exact Reconstructive Effectiveness (ERE) and Semantic Reconstruction Effectiveness (SRE) -- that quantify the level of preserved intent between text compressed and decompressed by the LLMs we studied. Our initial results indicate that GPT-4 can effectively compress and reconstruct text while preserving the semantic essence of the original text, providing a path to leverage $\sim$5$\times$ more tokens than present limits allow.
|
2008.11995
|
Amelie Royer
|
Amelie Royer and Christoph H. Lampert
|
A Flexible Selection Scheme for Minimum-Effort Transfer Learning
|
WACV 2020
| null |
10.1109/WACV45572.2020.9093635
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-tuning is a popular way of exploiting knowledge contained in a
pre-trained convolutional network for a new visual recognition task. However,
the orthogonal setting of transferring knowledge from a pretrained network to a
visually different yet semantically close source is rarely considered: This
commonly happens with real-life data, which is not necessarily as clean as the
training source (noise, geometric transformations, different modalities, etc.).
To tackle such scenarios, we introduce a new, generalized form of fine-tuning,
called flex-tuning, in which any individual unit (e.g. layer) of a network can
be tuned, and the most promising one is chosen automatically. In order to make
the method appealing for practical use, we propose two lightweight and faster
selection procedures that prove to be good approximations in practice. We study
these selection criteria empirically across a variety of domain shifts and data
scarcity scenarios, and show that fine-tuning individual units, despite its
simplicity, yields very good results as an adaptation technique. As it turns
out, in contrast to common practice, rather than the last fully-connected unit
it is best to tune an intermediate or early one in many domain-shift scenarios,
which is accurately detected by flex-tuning.
|
[
{
"created": "Thu, 27 Aug 2020 08:57:30 GMT",
"version": "v1"
}
] |
2020-08-28
|
[
[
"Royer",
"Amelie",
""
],
[
"Lampert",
"Christoph H.",
""
]
] |
Fine-tuning is a popular way of exploiting knowledge contained in a pre-trained convolutional network for a new visual recognition task. However, the orthogonal setting of transferring knowledge from a pretrained network to a visually different yet semantically close source is rarely considered: This commonly happens with real-life data, which is not necessarily as clean as the training source (noise, geometric transformations, different modalities, etc.). To tackle such scenarios, we introduce a new, generalized form of fine-tuning, called flex-tuning, in which any individual unit (e.g. layer) of a network can be tuned, and the most promising one is chosen automatically. In order to make the method appealing for practical use, we propose two lightweight and faster selection procedures that prove to be good approximations in practice. We study these selection criteria empirically across a variety of domain shifts and data scarcity scenarios, and show that fine-tuning individual units, despite its simplicity, yields very good results as an adaptation technique. As it turns out, in contrast to common practice, rather than the last fully-connected unit it is best to tune an intermediate or early one in many domain-shift scenarios, which is accurately detected by flex-tuning.
|
1706.03102
|
Solon Barocas
|
Solon Barocas, Elizabeth Bradley, Vasant Honavar, and Foster Provost
|
Big Data, Data Science, and Civil Rights
|
A Computing Community Consortium (CCC) white paper, 8 pages
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in data analytics bring with them civil rights implications.
Data-driven and algorithmic decision making increasingly determine how
businesses target advertisements to consumers, how police departments monitor
individuals or groups, how banks decide who gets a loan and who does not, how
employers hire, how colleges and universities make admissions and financial aid
decisions, and much more. As data-driven decisions increasingly affect every
corner of our lives, there is an urgent need to ensure they do not become
instruments of discrimination, barriers to equality, threats to social justice,
and sources of unfairness. In this paper, we argue for a concrete research
agenda aimed at addressing these concerns, comprising five areas of emphasis:
(i) Determining if models and modeling procedures exhibit objectionable bias;
(ii) Building awareness of fairness into machine learning methods; (iii)
Improving the transparency and control of data- and model-driven decision
making; (iv) Looking beyond the algorithm(s) for sources of bias and
unfairness: in the myriad human decisions made during the problem formulation
and modeling process; and (v) Supporting the cross-disciplinary scholarship
necessary to do all of that well.
|
[
{
"created": "Fri, 9 Jun 2017 19:45:28 GMT",
"version": "v1"
}
] |
2017-06-13
|
[
[
"Barocas",
"Solon",
""
],
[
"Bradley",
"Elizabeth",
""
],
[
"Honavar",
"Vasant",
""
],
[
"Provost",
"Foster",
""
]
] |
Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness: in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well.
|
2101.06286
|
M. Mehdi Afsar
|
M. Mehdi Afsar, Trafford Crump, Behrouz Far
|
Reinforcement learning based recommender systems: A survey
|
To appear in ACM Computing Surveys
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender systems (RSs) have become an inseparable part of our everyday
lives. They help us find our favorite items to purchase, our friends on social
networks, and our favorite movies to watch. Traditionally, the recommendation
problem was considered to be a classification or prediction problem, but it is
now widely agreed that formulating it as a sequential decision problem can
better reflect the user-system interaction. Therefore, it can be formulated as
a Markov decision process (MDP) and be solved by reinforcement learning (RL)
algorithms. Unlike traditional recommendation methods, including collaborative
filtering and content-based filtering, RL is able to handle the sequential,
dynamic user-system interaction and to take into account the long-term user
engagement. Although the idea of using RL for recommendation is not new and has
been around for about two decades, it was not very practical, mainly because of
scalability problems of traditional RL algorithms. However, a new trend has
emerged in the field since the introduction of deep reinforcement learning
(DRL), which made it possible to apply RL to the recommendation problem with
large state and action spaces. In this paper, a survey on reinforcement
learning based recommender systems (RLRSs) is presented. Our aim is to present
an outlook on the field and to provide the reader with a fairly complete
knowledge of key concepts of the field. We first recognize and illustrate that
RLRSs can be generally classified into RL- and DRL-based methods. Then, we
propose an RLRS framework with four components, i.e., state representation,
policy optimization, reward formulation, and environment building, and survey
RLRS algorithms accordingly. We highlight emerging topics and depict important
trends using various graphs and tables. Finally, we discuss important aspects
and challenges that can be addressed in the future.
|
[
{
"created": "Fri, 15 Jan 2021 19:42:10 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Jun 2022 05:25:37 GMT",
"version": "v2"
}
] |
2022-06-09
|
[
[
"Afsar",
"M. Mehdi",
""
],
[
"Crump",
"Trafford",
""
],
[
"Far",
"Behrouz",
""
]
] |
Recommender systems (RSs) have become an inseparable part of our everyday lives. They help us find our favorite items to purchase, our friends on social networks, and our favorite movies to watch. Traditionally, the recommendation problem was considered to be a classification or prediction problem, but it is now widely agreed that formulating it as a sequential decision problem can better reflect the user-system interaction. Therefore, it can be formulated as a Markov decision process (MDP) and be solved by reinforcement learning (RL) algorithms. Unlike traditional recommendation methods, including collaborative filtering and content-based filtering, RL is able to handle the sequential, dynamic user-system interaction and to take into account the long-term user engagement. Although the idea of using RL for recommendation is not new and has been around for about two decades, it was not very practical, mainly because of scalability problems of traditional RL algorithms. However, a new trend has emerged in the field since the introduction of deep reinforcement learning (DRL), which made it possible to apply RL to the recommendation problem with large state and action spaces. In this paper, a survey on reinforcement learning based recommender systems (RLRSs) is presented. Our aim is to present an outlook on the field and to provide the reader with a fairly complete knowledge of key concepts of the field. We first recognize and illustrate that RLRSs can be generally classified into RL- and DRL-based methods. Then, we propose an RLRS framework with four components, i.e., state representation, policy optimization, reward formulation, and environment building, and survey RLRS algorithms accordingly. We highlight emerging topics and depict important trends using various graphs and tables. Finally, we discuss important aspects and challenges that can be addressed in the future.
|
1911.03310
|
Jind\v{r}ich Libovick\'y
|
Jind\v{r}ich Libovick\'y and Rudolf Rosa and Alexander Fraser
|
How Language-Neutral is Multilingual BERT?
|
6 pages, 3 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multilingual BERT (mBERT) provides sentence representations for 104
languages, which are useful for many multi-lingual tasks. Previous work probed
the cross-linguality of mBERT using zero-shot transfer learning on
morphological and syntactic tasks. We instead focus on the semantic properties
of mBERT. We show that mBERT representations can be split into a
language-specific component and a language-neutral component, and that the
language-neutral component is sufficiently general in terms of modeling
semantics to allow high-accuracy word-alignment and sentence retrieval but is
not yet good enough for the more difficult task of MT quality estimation. Our
work presents interesting challenges which must be solved to build better
language-neutral representations, particularly for tasks requiring linguistic
transfer of semantics.
|
[
{
"created": "Fri, 8 Nov 2019 15:12:36 GMT",
"version": "v1"
}
] |
2019-11-11
|
[
[
"Libovický",
"Jindřich",
""
],
[
"Rosa",
"Rudolf",
""
],
[
"Fraser",
"Alexander",
""
]
] |
Multilingual BERT (mBERT) provides sentence representations for 104 languages, which are useful for many multi-lingual tasks. Previous work probed the cross-linguality of mBERT using zero-shot transfer learning on morphological and syntactic tasks. We instead focus on the semantic properties of mBERT. We show that mBERT representations can be split into a language-specific component and a language-neutral component, and that the language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment and sentence retrieval but is not yet good enough for the more difficult task of MT quality estimation. Our work presents interesting challenges which must be solved to build better language-neutral representations, particularly for tasks requiring linguistic transfer of semantics.
|
1905.08348
|
Wenjie Xiong
|
Wenjie Xiong and Jakub Szefer
|
Leaking Information Through Cache LRU States
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Least-Recently Used cache replacement policy and its variants are widely
deployed in modern processors. This paper shows for the first time in detail
that the LRU states of caches can be used to leak information: any access to a
cache by a sender will modify the LRU state, and the receiver is able to
observe this through a timing measurement. This paper presents LRU timing-based
channels both when the sender and the receiver have shared memory, e.g., shared
library data pages, and when they are separate processes without shared memory.
In addition, the new LRU timing-based channels are demonstrated on both Intel
and AMD processors in scenarios where the sender and the receiver are sharing
the cache in both hyper-threaded and time-sliced settings. The
transmission rate of the LRU channels can be up to 600Kbps per cache set in the
hyper-threaded setting. Different from the majority of existing cache channels
which require the sender to trigger cache misses, the new LRU channels work
with the sender only having cache hits, making the channel faster and more
stealthy. This paper also demonstrates that the new LRU channels can be used in
transient execution attacks, e.g., Spectre. Further, this paper shows that the
LRU channels pose threats to existing secure cache designs, and this work
demonstrates that the LRU channels affect the secure PL cache. The paper finishes by
discussing and evaluating possible defenses.
|
[
{
"created": "Mon, 20 May 2019 21:11:13 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Jan 2020 04:15:48 GMT",
"version": "v2"
}
] |
2020-01-06
|
[
[
"Xiong",
"Wenjie",
""
],
[
"Szefer",
"Jakub",
""
]
] |
The Least-Recently Used cache replacement policy and its variants are widely deployed in modern processors. This paper shows for the first time in detail that the LRU states of caches can be used to leak information: any access to a cache by a sender will modify the LRU state, and the receiver is able to observe this through a timing measurement. This paper presents LRU timing-based channels both when the sender and the receiver have shared memory, e.g., shared library data pages, and when they are separate processes without shared memory. In addition, the new LRU timing-based channels are demonstrated on both Intel and AMD processors in scenarios where the sender and the receiver are sharing the cache in both hyper-threaded and time-sliced settings. The transmission rate of the LRU channels can be up to 600Kbps per cache set in the hyper-threaded setting. Different from the majority of existing cache channels which require the sender to trigger cache misses, the new LRU channels work with the sender only having cache hits, making the channel faster and more stealthy. This paper also demonstrates that the new LRU channels can be used in transient execution attacks, e.g., Spectre. Further, this paper shows that the LRU channels pose threats to existing secure cache designs, and this work demonstrates that the LRU channels affect the secure PL cache. The paper finishes by discussing and evaluating possible defenses.
|
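The hit-only LRU channel described in this record can be illustrated with a toy simulation (all names, the 8-way set size, and the hit/miss membership check standing in for the receiver's timing measurement are illustrative assumptions, not the paper's implementation):

```python
from collections import OrderedDict

class LRUSet:
    """Toy model of one W-way cache set with true-LRU replacement."""
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()  # oldest (LRU) first, newest (MRU) last

    def access(self, tag):
        hit = tag in self.lines
        if hit:
            self.lines.move_to_end(tag)          # hit: only the LRU order changes
        else:
            if len(self.lines) >= self.ways:
                self.lines.popitem(last=False)   # evict the LRU entry
            self.lines[tag] = True
        return hit

def transmit(bit, ways=8):
    cache = LRUSet(ways)
    shared = "s"                                 # line shared by sender and receiver
    for t in [shared] + [f"r{i}" for i in range(ways - 1)]:
        cache.access(t)                          # receiver primes: 'shared' is now LRU
    if bit:
        cache.access(shared)                     # sender: a *hit* that flips the LRU state
    cache.access("x")                            # receiver forces exactly one eviction
    return 1 if cache.access(shared) else 0      # fast re-access (hit) decodes a 1

print([transmit(b) for b in (0, 1, 1, 0)])  # -> [0, 1, 1, 0]
```

Note the sender never causes a miss: it only refreshes the shared line's LRU position, which is what makes the channel stealthier than eviction-based channels.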
2407.16990
|
Liang Mi
|
Weijun Wang, Liang Mi, Shaowei Cen, Haipeng Dai, Yuanchun Li, Xiaoming
Fu, Yunxin Liu
|
Region-based Content Enhancement for Efficient Video Analytics at the
Edge
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Video analytics is widespread in various applications serving our society.
Recent advances in content enhancement for video analytics offer significant
benefits for bandwidth saving and accuracy improvement. However, existing
content-enhanced video analytics systems are excessively computationally
expensive and provide extremely low throughput. In this paper, we present
region-based content enhancement, which enhances only the important regions in
videos, to improve analytical accuracy. Our system, RegenHance, enables
high-accuracy and high-throughput video analytics at the edge through 1) a
macroblock-based region importance predictor that identifies the important
regions quickly and precisely, 2) a region-aware enhancer that stitches
sparsely distributed regions into dense tensors and enhances them efficiently,
and 3) a profile-based execution planner that allocates appropriate resources
to the enhancement and analytics components. We prototype RegenHance on five
heterogeneous edge devices. Experiments on two analytical tasks reveal that
region-based enhancement improves the overall accuracy by 10-19% and achieves
2-3x the throughput of state-of-the-art frame-based enhancement methods.
|
[
{
"created": "Wed, 24 Jul 2024 04:17:32 GMT",
"version": "v1"
}
] |
2024-07-25
|
[
[
"Wang",
"Weijun",
""
],
[
"Mi",
"Liang",
""
],
[
"Cen",
"Shaowei",
""
],
[
"Dai",
"Haipeng",
""
],
[
"Li",
"Yuanchun",
""
],
[
"Fu",
"Xiaoming",
""
],
[
"Liu",
"Yunxin",
""
]
] |
Video analytics is widespread in various applications serving our society. Recent advances in content enhancement for video analytics offer significant benefits for bandwidth saving and accuracy improvement. However, existing content-enhanced video analytics systems are excessively computationally expensive and provide extremely low throughput. In this paper, we present region-based content enhancement, which enhances only the important regions in videos, to improve analytical accuracy. Our system, RegenHance, enables high-accuracy and high-throughput video analytics at the edge through 1) a macroblock-based region importance predictor that identifies the important regions quickly and precisely, 2) a region-aware enhancer that stitches sparsely distributed regions into dense tensors and enhances them efficiently, and 3) a profile-based execution planner that allocates appropriate resources to the enhancement and analytics components. We prototype RegenHance on five heterogeneous edge devices. Experiments on two analytical tasks reveal that region-based enhancement improves the overall accuracy by 10-19% and achieves 2-3x the throughput of state-of-the-art frame-based enhancement methods.
|
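The "stitch sparsely distributed regions into dense tensors" step in this record can be sketched minimally as follows (the function name, box format, top-k selection, and top-left zero-padding policy are our assumptions, not RegenHance's actual design):

```python
import numpy as np

def pack_regions(frame, boxes, scores, k=2):
    """Crop the k most important regions of a frame and stitch them into
    one dense batch tensor, zero-padding each crop to the largest crop."""
    order = np.argsort(scores)[::-1][:k]               # top-k by importance
    crops = [frame[y0:y1, x0:x1]
             for (y0, y1, x0, x1) in (boxes[i] for i in order)]
    h = max(c.shape[0] for c in crops)
    w = max(c.shape[1] for c in crops)
    batch = np.zeros((len(crops), h, w), dtype=frame.dtype)
    for i, c in enumerate(crops):
        batch[i, :c.shape[0], :c.shape[1]] = c         # top-left aligned
    return batch

frame = np.arange(64, dtype=np.float32).reshape(8, 8)
boxes = [(0, 4, 0, 4), (2, 8, 2, 6), (0, 2, 0, 2)]     # (y0, y1, x0, x1)
batch = pack_regions(frame, boxes, scores=[0.9, 0.7, 0.1], k=2)
print(batch.shape)  # -> (2, 6, 4)
```

Only the two high-score regions enter the dense batch, so a downstream enhancer runs on a small tensor instead of the full frame.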
1709.01148
|
Yizhe Zhu
|
Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, Ahmed Elgammal
|
Link the head to the "beak": Zero Shot Learning from Noisy Text
Description at Part Precision
|
Accepted by CVPR'17
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study learning visual classifiers from unstructured text
descriptions at part precision with no training images. We propose a learning
framework that is able to connect text terms to their relevant parts and
suppress connections to non-visual text terms without any part-text
annotations. For instance, this learning process enables terms like "beak" to
be sparsely linked to the visual representation of parts like the head, while
reducing the effect of non-visual terms like "migrate" on classifier
prediction. Images are encoded by a part-based CNN that detects bird parts and
learns part-specific representations. Part-based visual classifiers are
predicted from text descriptions of unseen visual classes to facilitate
classification without training images (also known as zero-shot recognition).
We performed our experiments on the CUBirds 2011 dataset and improve the
state-of-the-art text-based zero-shot recognition results from 34.7\% to
43.6\%. We also created large-scale benchmarks on North American Bird Images
augmented with text descriptions, where we also show that our approach
outperforms existing methods. Our code, data, and models are publicly
available.
|
[
{
"created": "Mon, 4 Sep 2017 20:36:14 GMT",
"version": "v1"
}
] |
2017-09-06
|
[
[
"Elhoseiny",
"Mohamed",
""
],
[
"Zhu",
"Yizhe",
""
],
[
"Zhang",
"Han",
""
],
[
"Elgammal",
"Ahmed",
""
]
] |
In this paper, we study learning visual classifiers from unstructured text descriptions at part precision with no training images. We propose a learning framework that is able to connect text terms to their relevant parts and suppress connections to non-visual text terms without any part-text annotations. For instance, this learning process enables terms like "beak" to be sparsely linked to the visual representation of parts like the head, while reducing the effect of non-visual terms like "migrate" on classifier prediction. Images are encoded by a part-based CNN that detects bird parts and learns part-specific representations. Part-based visual classifiers are predicted from text descriptions of unseen visual classes to facilitate classification without training images (also known as zero-shot recognition). We performed our experiments on the CUBirds 2011 dataset and improve the state-of-the-art text-based zero-shot recognition results from 34.7\% to 43.6\%. We also created large-scale benchmarks on North American Bird Images augmented with text descriptions, where we also show that our approach outperforms existing methods. Our code, data, and models are publicly available.
|
1511.03690
|
David Harwath
|
David Harwath and James Glass
|
Deep Multimodal Semantic Embeddings for Speech and Images
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a model which takes as input a corpus of images
with relevant spoken captions and finds a correspondence between the two
modalities. We employ a pair of convolutional neural networks to model visual
objects and speech signals at the word level, and tie the networks together
with an embedding and alignment model which learns a joint semantic space over
both modalities. We evaluate our model using image search and annotation tasks
on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000
spoken captions using Amazon Mechanical Turk.
|
[
{
"created": "Wed, 11 Nov 2015 21:30:10 GMT",
"version": "v1"
}
] |
2015-11-13
|
[
[
"Harwath",
"David",
""
],
[
"Glass",
"James",
""
]
] |
In this paper, we present a model which takes as input a corpus of images with relevant spoken captions and finds a correspondence between the two modalities. We employ a pair of convolutional neural networks to model visual objects and speech signals at the word level, and tie the networks together with an embedding and alignment model which learns a joint semantic space over both modalities. We evaluate our model using image search and annotation tasks on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000 spoken captions using Amazon Mechanical Turk.
|
2207.14659
|
Gonzalo Jes\'us Paz Delgado
|
G.J. Paz-Delgado, C.J. P\'erez-del-Pulgar, M. Azkarate, F. Kirchner
and A. Garc\'ia-Cerezo
|
Multi-stage warm started optimal motion planning for over-actuated
mobile platforms
| null | null |
10.1007/s11370-023-00461-x
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents a computationally lightweight motion planner for
over-actuated platforms. For this purpose, a general state-space model for
mobile platforms with several kinematic chains is defined, which considers
non-linearities and constraints. The proposed motion planner is based on a
sequential multi-stage approach that takes advantage of the warm start on each
step. Firstly, a globally optimal and smooth 2D/3D trajectory is generated
using the Fast Marching Method. This trajectory is fed as a warm start to a
sequential linear quadratic regulator that is able to generate an optimal
motion plan without constraints for all the platform actuators. Finally, a
feasible motion plan is generated considering the constraints defined in the
model. In this respect, the sequential linear quadratic regulator is employed
again, taking the previously generated unconstrained motion plan as a warm
start. This novel approach has been deployed on the ExoMars Testing Rover of
the European Space Agency. This rover is an Ackermann-capable planetary
exploration testbed that is equipped with a robotic arm. Several experiments
were carried out demonstrating that the proposed approach speeds up the
computation time, increasing the success ratio for a Martian sample retrieval
mission, which can be considered as a representative use case of an
over-actuated mobile platform.
|
[
{
"created": "Fri, 29 Jul 2022 13:05:45 GMT",
"version": "v1"
}
] |
2023-04-26
|
[
[
"Paz-Delgado",
"G. J.",
""
],
[
"Pérez-del-Pulgar",
"C. J.",
""
],
[
"Azkarate",
"M.",
""
],
[
"Kirchner",
"F.",
""
],
[
"García-Cerezo",
"A.",
""
]
] |
This work presents a computationally lightweight motion planner for over-actuated platforms. For this purpose, a general state-space model for mobile platforms with several kinematic chains is defined, which considers non-linearities and constraints. The proposed motion planner is based on a sequential multi-stage approach that takes advantage of the warm start on each step. Firstly, a globally optimal and smooth 2D/3D trajectory is generated using the Fast Marching Method. This trajectory is fed as a warm start to a sequential linear quadratic regulator that is able to generate an optimal motion plan without constraints for all the platform actuators. Finally, a feasible motion plan is generated considering the constraints defined in the model. In this respect, the sequential linear quadratic regulator is employed again, taking the previously generated unconstrained motion plan as a warm start. This novel approach has been deployed on the ExoMars Testing Rover of the European Space Agency. This rover is an Ackermann-capable planetary exploration testbed that is equipped with a robotic arm. Several experiments were carried out demonstrating that the proposed approach speeds up the computation time, increasing the success ratio for a Martian sample retrieval mission, which can be considered as a representative use case of an over-actuated mobile platform.
|
1910.07883
|
Matthias Niedermaier
|
Matthias Niedermaier and Florian Fischer and Alexander von Bodisco
|
PropFuzz -- An IT-Security Fuzzing Framework for Proprietary ICS
Protocols
|
2017 International Conference on Applied Electronics (AE)
| null |
10.23919/AE.2017.8053600
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programmable Logic Controllers are used in smart homes, in production
processes, and to control critical infrastructures. Modern industrial devices
at the control level often communicate with each other and with SCADA systems
over proprietary protocols on top of TCP/IP. The networks in which the
controllers operate are usually considered trustworthy and are therefore not
properly secured. Due to the growing connectivity caused by the Internet of
Things (IoT) and Industry 4.0, the security risks are rising. Therefore, the
demand for security assessment tools for industrial networks is high. In this
paper, we introduce a new fuzzing framework called PropFuzz, which is capable
of fuzzing proprietary industrial control system protocols and monitoring the
behavior of the controller. Furthermore, we present first results of a
security assessment with our framework.
|
[
{
"created": "Thu, 17 Oct 2019 13:20:10 GMT",
"version": "v1"
}
] |
2019-10-18
|
[
[
"Niedermaier",
"Matthias",
""
],
[
"Fischer",
"Florian",
""
],
[
"von Bodisco",
"Alexander",
""
]
] |
Programmable Logic Controllers are used in smart homes, in production processes, and to control critical infrastructures. Modern industrial devices at the control level often communicate with each other and with SCADA systems over proprietary protocols on top of TCP/IP. The networks in which the controllers operate are usually considered trustworthy and are therefore not properly secured. Due to the growing connectivity caused by the Internet of Things (IoT) and Industry 4.0, the security risks are rising. Therefore, the demand for security assessment tools for industrial networks is high. In this paper, we introduce a new fuzzing framework called PropFuzz, which is capable of fuzzing proprietary industrial control system protocols and monitoring the behavior of the controller. Furthermore, we present first results of a security assessment with our framework.
|
2405.14307
|
Weigang Lu
|
Weigang Lu, Ziyu Guan, Wei Zhao, and Yaming Yang
|
AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation
|
Accepted by KDD 2024
|
KDD 2024
| null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) have revolutionized graph-based machine
learning, but their heavy computational demands pose challenges for
latency-sensitive edge devices in practical industrial applications. In
response, a new wave of methods, collectively known as GNN-to-MLP Knowledge
Distillation, has emerged. They aim to transfer GNN-learned knowledge to a more
efficient MLP student, which offers faster, resource-efficient inference while
maintaining competitive performance compared to GNNs. However, these methods
face significant challenges in situations with insufficient training data and
incomplete test data, limiting their applicability in real-world applications.
To address these challenges, we propose AdaGMLP, an AdaBoosting GNN-to-MLP
Knowledge Distillation framework. It leverages an ensemble of diverse MLP
students trained on different subsets of labeled nodes, addressing the issue of
insufficient training data. Additionally, it incorporates a Node Alignment
technique for robust predictions on test data with missing or incomplete
features. Our experiments on seven benchmark datasets with different settings
demonstrate that AdaGMLP outperforms existing G2M methods, making it suitable
for a wide range of latency-sensitive real-world applications. We have
submitted our code to the GitHub repository
(https://github.com/WeigangLu/AdaGMLP-KDD24).
|
[
{
"created": "Thu, 23 May 2024 08:28:44 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Lu",
"Weigang",
""
],
[
"Guan",
"Ziyu",
""
],
[
"Zhao",
"Wei",
""
],
[
"Yang",
"Yaming",
""
]
] |
Graph Neural Networks (GNNs) have revolutionized graph-based machine learning, but their heavy computational demands pose challenges for latency-sensitive edge devices in practical industrial applications. In response, a new wave of methods, collectively known as GNN-to-MLP Knowledge Distillation, has emerged. They aim to transfer GNN-learned knowledge to a more efficient MLP student, which offers faster, resource-efficient inference while maintaining competitive performance compared to GNNs. However, these methods face significant challenges in situations with insufficient training data and incomplete test data, limiting their applicability in real-world applications. To address these challenges, we propose AdaGMLP, an AdaBoosting GNN-to-MLP Knowledge Distillation framework. It leverages an ensemble of diverse MLP students trained on different subsets of labeled nodes, addressing the issue of insufficient training data. Additionally, it incorporates a Node Alignment technique for robust predictions on test data with missing or incomplete features. Our experiments on seven benchmark datasets with different settings demonstrate that AdaGMLP outperforms existing G2M methods, making it suitable for a wide range of latency-sensitive real-world applications. We have submitted our code to the GitHub repository (https://github.com/WeigangLu/AdaGMLP-KDD24).
|
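The GNN-to-MLP distillation this record builds on can be sketched with a generic soft-label knowledge-distillation loss (this is the standard temperature-scaled KD objective, not AdaGMLP's actual loss; an AdaBoost-style ensemble would weight and combine one such objective per MLP student):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KD loss: KL(teacher || student) at temperature T,
    averaged over nodes, scaled by T^2 as is conventional."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1)
    return float((T * T) * kl.mean())

teacher = np.array([[4.0, 1.0, 0.0], [0.5, 3.0, 0.2]])   # per-node GNN logits
loss_same = distill_loss(teacher, teacher)                # matching student -> ~0
loss_diff = distill_loss(np.zeros_like(teacher), teacher) # uninformed student -> > 0
print(loss_same < 1e-9, loss_diff > 0)  # -> True True
```

Minimizing this loss pushes the cheap MLP student's node predictions toward the GNN teacher's soft labels, which is what allows GNN-free inference at deployment time.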
1401.6022
|
Rishab Nithyanand
|
Xiang Cai, Rishab Nithyanand, and Rob Johnson
|
New Approaches to Website Fingerprinting Defenses
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Website fingerprinting attacks enable an adversary to infer which website a
victim is visiting, even if the victim uses an encrypting proxy, such as Tor.
Previous work has shown that all proposed defenses against website
fingerprinting attacks are ineffective.
This paper advances the study of website fingerprinting attacks and defenses
in two ways. First, we develop bounds on the trade-off between security and
bandwidth overhead that any fingerprinting defense scheme can achieve. This
enables us to compare schemes with different security/overhead trade-offs by
comparing how close they are to the lower bound. We then refine, implement, and
evaluate the Congestion Sensitive BuFLO scheme outlined by Cai et al.
CS-BuFLO, which is based on the provably-secure BuFLO defense proposed by Dyer
et al., was not fully specified by Cai et al., but has nonetheless attracted
the attention of the Tor developers. Our experiments find that CS-BuFLO has
high overhead (around 2.3-2.8x) but can get 6x closer to the bandwidth/security
trade-off lower bound than Tor or plain SSH.
|
[
{
"created": "Thu, 23 Jan 2014 15:55:20 GMT",
"version": "v1"
}
] |
2014-01-24
|
[
[
"Cai",
"Xiang",
""
],
[
"Nithyanand",
"Rishab",
""
],
[
"Johnson",
"Rob",
""
]
] |
Website fingerprinting attacks enable an adversary to infer which website a victim is visiting, even if the victim uses an encrypting proxy, such as Tor. Previous work has shown that all proposed defenses against website fingerprinting attacks are ineffective. This paper advances the study of website fingerprinting attacks and defenses in two ways. First, we develop bounds on the trade-off between security and bandwidth overhead that any fingerprinting defense scheme can achieve. This enables us to compare schemes with different security/overhead trade-offs by comparing how close they are to the lower bound. We then refine, implement, and evaluate the Congestion Sensitive BuFLO scheme outlined by Cai et al. CS-BuFLO, which is based on the provably-secure BuFLO defense proposed by Dyer et al., was not fully specified by Cai et al., but has nonetheless attracted the attention of the Tor developers. Our experiments find that CS-BuFLO has high overhead (around 2.3-2.8x) but can get 6x closer to the bandwidth/security trade-off lower bound than Tor or plain SSH.
|
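Where the bandwidth overhead of constant-rate defenses like BuFLO comes from can be sketched with a back-of-the-envelope model (cell size, send interval, and minimum duration below are illustrative placeholders, not the parameters used in the paper):

```python
import math

def buflo_overhead(real_bytes, duration_s, cell_bytes=750, interval_ms=20,
                   min_duration_s=10.0):
    """Bandwidth overhead of a BuFLO-style defense: fixed-size cells are
    sent every `interval_ms` until all real data is out AND at least
    `min_duration_s` has elapsed, hiding the page's true size and timing."""
    cells_needed = math.ceil(real_bytes / cell_bytes)
    padded_duration = max(duration_s, min_duration_s,
                          cells_needed * interval_ms / 1000.0)
    cells_sent = math.ceil(padded_duration * 1000.0 / interval_ms)
    return cells_sent * cell_bytes / real_bytes

# A 100 KB page loaded in 2 s is padded out to 10 s of constant-rate cells.
print(round(buflo_overhead(100_000, 2.0), 2))  # -> 3.75
```

Small, fast page loads pay the largest relative overhead, which is exactly the security/bandwidth trade-off the paper's lower bounds quantify.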
2205.02919
|
Camilo Sarmiento
|
Camilo Sarmiento, Gauvain Bourgne, Katsumi Inoue, Daniele Cavalli,
Jean-Gabriel Ganascia
|
Action Languages Based Actual Causality for Computational Ethics: a
Sound and Complete Implementation in ASP
|
22 pages, 7 figures
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Although moral responsibility is not circumscribed by causality, they are
both closely intermixed. Furthermore, rationally understanding the evolution of
the physical world is inherently linked with the idea of causality. Thus,
decision-making applications based on automated planning inevitably have to
deal with causality, especially if they consider imputability aspects or
integrate references to ethical norms. The many debates around causation in the
last decades have shown how complex this notion is and thus how difficult its
integration with planning is. As a result, much of the work in computational
ethics relegates causality to the background, despite the considerations stated
above. This paper's contribution is to provide a complete and sound translation
into logic programming of an actual causation definition suitable for action
languages; this definition is a formalisation of Wright's NESS test. The
obtained logic program makes it possible to deal with complex causal relations.
In addition to enabling agents to reason about causality, this contribution
specifically enables the computational ethics domain to handle situations that
were previously out of reach. In a context where ethical considerations in
decision-making are increasingly important, advances in computational ethics
can greatly benefit the entire AI community.
|
[
{
"created": "Thu, 5 May 2022 21:00:59 GMT",
"version": "v1"
},
{
"created": "Wed, 24 May 2023 12:43:13 GMT",
"version": "v2"
}
] |
2023-05-25
|
[
[
"Sarmiento",
"Camilo",
""
],
[
"Bourgne",
"Gauvain",
""
],
[
"Inoue",
"Katsumi",
""
],
[
"Cavalli",
"Daniele",
""
],
[
"Ganascia",
"Jean-Gabriel",
""
]
] |
Although moral responsibility is not circumscribed by causality, they are both closely intermixed. Furthermore, rationally understanding the evolution of the physical world is inherently linked with the idea of causality. Thus, decision-making applications based on automated planning inevitably have to deal with causality, especially if they consider imputability aspects or integrate references to ethical norms. The many debates around causation in the last decades have shown how complex this notion is and thus how difficult its integration with planning is. As a result, much of the work in computational ethics relegates causality to the background, despite the considerations stated above. This paper's contribution is to provide a complete and sound translation into logic programming of an actual causation definition suitable for action languages; this definition is a formalisation of Wright's NESS test. The obtained logic program makes it possible to deal with complex causal relations. In addition to enabling agents to reason about causality, this contribution specifically enables the computational ethics domain to handle situations that were previously out of reach. In a context where ethical considerations in decision-making are increasingly important, advances in computational ethics can greatly benefit the entire AI community.
|
2207.08605
|
Subhankar Roy
|
Subhankar Roy, Mingxuan Liu, Zhun Zhong, Nicu Sebe, Elisa Ricci
|
Class-incremental Novel Class Discovery
|
ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We study the new task of class-incremental Novel Class Discovery
(class-iNCD), which refers to the problem of discovering novel categories in an
unlabelled data set by leveraging a pre-trained model that has been trained on
a labelled data set containing disjoint yet related categories. Apart from
discovering novel classes, we also aim at preserving the ability of the model
to recognize previously seen base categories. Inspired by rehearsal-based
incremental learning methods, in this paper we propose a novel approach for
class-iNCD which prevents forgetting of past information about the base classes
by jointly exploiting base class feature prototypes and feature-level knowledge
distillation. We also propose a self-training clustering strategy that
simultaneously clusters novel categories and trains a joint classifier for both
the base and novel classes. This makes our method able to operate in a
class-incremental setting. Our experiments, conducted on three common
benchmarks, demonstrate that our method significantly outperforms
state-of-the-art approaches. Code is available at
https://github.com/OatmealLiu/class-iNCD
|
[
{
"created": "Mon, 18 Jul 2022 13:49:27 GMT",
"version": "v1"
}
] |
2022-07-19
|
[
[
"Roy",
"Subhankar",
""
],
[
"Liu",
"Mingxuan",
""
],
[
"Zhong",
"Zhun",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Ricci",
"Elisa",
""
]
] |
We study the new task of class-incremental Novel Class Discovery (class-iNCD), which refers to the problem of discovering novel categories in an unlabelled data set by leveraging a pre-trained model that has been trained on a labelled data set containing disjoint yet related categories. Apart from discovering novel classes, we also aim at preserving the ability of the model to recognize previously seen base categories. Inspired by rehearsal-based incremental learning methods, in this paper we propose a novel approach for class-iNCD which prevents forgetting of past information about the base classes by jointly exploiting base class feature prototypes and feature-level knowledge distillation. We also propose a self-training clustering strategy that simultaneously clusters novel categories and trains a joint classifier for both the base and novel classes. This makes our method able to operate in a class-incremental setting. Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches. Code is available at https://github.com/OatmealLiu/class-iNCD
|
1710.04582
|
Eleni Vasilaki D.Phil.
|
Eleni Vasilaki
|
Is Epicurus the father of Reinforcement Learning?
|
4 pages
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Epicurean Philosophy is commonly thought of as simplistic and hedonistic.
Here I discuss how this is a misconception and explore its link to
Reinforcement Learning. Based on the letters of Epicurus, I construct an
objective function for hedonism which turns out to be equivalent to the
Reinforcement Learning objective function when the discount factor is omitted.
I then discuss Plato's and Aristotle's views, which can also be loosely linked
to Reinforcement Learning, as well as their weaknesses in relation to it.
Finally, I emphasise the close affinity of the Epicurean views and the Bellman
equation.
|
[
{
"created": "Thu, 12 Oct 2017 16:07:18 GMT",
"version": "v1"
}
] |
2017-10-13
|
[
[
"Vasilaki",
"Eleni",
""
]
] |
The Epicurean Philosophy is commonly thought of as simplistic and hedonistic. Here I discuss how this is a misconception and explore its link to Reinforcement Learning. Based on the letters of Epicurus, I construct an objective function for hedonism which turns out to be equivalent to the Reinforcement Learning objective function when the discount factor is omitted. I then discuss Plato's and Aristotle's views, which can also be loosely linked to Reinforcement Learning, as well as their weaknesses in relation to it. Finally, I emphasise the close affinity of the Epicurean views and the Bellman equation.
|
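The objective-function claim in this record can be written down in a one-line sketch: with discount factor gamma = 1 the return is the plain sum of pleasures minus pains (the "Epicurean" objective the abstract constructs), while gamma < 1 recovers the standard discounted RL return (the reward values below are arbitrary):

```python
def ret(rewards, gamma=1.0):
    """Return G = sum_t gamma**t * r_t over a finite episode."""
    return sum(g * r for g, r in
               zip((gamma ** t for t in range(len(rewards))), rewards))

rewards = [1.0, 1.0, -0.5, 2.0]      # pleasures (+) and pains (-)
print(ret(rewards, gamma=1.0))   # -> 3.5   (undiscounted, "Epicurean" sum)
print(ret(rewards, gamma=0.5))   # -> 1.625 (standard discounted RL return)
```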
2208.05065
|
Yuzhu Sun
|
Yuzhu Sun, Mien Van, Stephen McIlvanna, Sean McLoone and Dariusz
Ceglarek
|
Fixed-time Integral Sliding Mode Control for Admittance Control of a
Robot Manipulator
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a novel fixed-time integral sliding mode controller for
admittance control to enhance physical human-robot collaboration. The proposed
method combines the compliance to external forces of admittance control with
the high robustness to uncertainties of integral sliding mode control (ISMC),
such that the system can collaborate with a human partner effectively in an
uncertain environment. Firstly, a fixed-time sliding surface is applied in the
ISMC to make the tracking error of the system converge within a fixed time
regardless of the initial condition. Then, a fixed-time backstepping controller
(BSP) is integrated into the ISMC as the nominal controller to realize global
fixed-time convergence. Furthermore, to overcome the singularity problem, a
non-singular fixed-time sliding surface is designed and integrated into the
controller, which is useful for practical applications. Finally, the proposed
controller is validated on a two-link robot manipulator with uncertainties and
external human forces. The results show that the proposed controller is
superior in terms of both tracking error and convergence time, and at the same
time can comply with human motion in a shared workspace.
|
[
{
"created": "Tue, 9 Aug 2022 22:47:19 GMT",
"version": "v1"
}
] |
2022-08-11
|
[
[
"Sun",
"Yuzhu",
""
],
[
"Van",
"Mien",
""
],
[
"McIlvanna",
"Stephen",
""
],
[
"McLoone",
"Sean",
""
],
[
"Ceglarek",
"Dariusz",
""
]
] |
This paper proposes a novel fixed-time integral sliding mode controller for admittance control to enhance physical human-robot collaboration. The proposed method combines the compliance to external forces of admittance control with the high robustness to uncertainties of integral sliding mode control (ISMC), such that the system can collaborate with a human partner effectively in an uncertain environment. Firstly, a fixed-time sliding surface is applied in the ISMC to make the tracking error of the system converge within a fixed time regardless of the initial condition. Then, a fixed-time backstepping controller (BSP) is integrated into the ISMC as the nominal controller to realize global fixed-time convergence. Furthermore, to overcome the singularity problem, a non-singular fixed-time sliding surface is designed and integrated into the controller, which is useful for practical applications. Finally, the proposed controller is validated on a two-link robot manipulator with uncertainties and external human forces. The results show that the proposed controller is superior in terms of both tracking error and convergence time, and at the same time can comply with human motion in a shared workspace.
|
1910.12056
|
Zhijie Wu
|
Chunjin Song, Zhijie Wu, Yang Zhou, Minglun Gong, Hui Huang
|
ETNet: Error Transition Network for Arbitrary Style Transfer
|
Accepted by NeurIPS 2019
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous valuable efforts have been devoted to achieving arbitrary style
transfer since the seminal work of Gatys et al. However, existing
state-of-the-art approaches often generate insufficiently stylized results
under challenging cases. We believe a fundamental reason is that these
approaches try to generate the stylized result in a single shot and hence fail
to fully satisfy the constraints on semantic structures in the content images
and style patterns in the style images. Inspired by the works on
error-correction, instead, we propose a self-correcting model to predict what
is wrong with the current stylization and refine it accordingly in an iterative
manner. For each refinement, we transit the error features across both the
spatial and scale domain and invert the processed features into a residual
image, with a network we call Error Transition Network (ETNet). The proposed
model improves over the state-of-the-art methods with better semantic
structures and more adaptive style pattern details. Various qualitative and
quantitative experiments show that the key concept of both progressive strategy
and error-correction leads to better results. Code and models are available at
https://github.com/zhijieW94/ETNet.
|
[
{
"created": "Sat, 26 Oct 2019 12:49:00 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Oct 2019 17:04:01 GMT",
"version": "v2"
}
] |
2019-10-30
|
[
[
"Song",
"Chunjin",
""
],
[
"Wu",
"Zhijie",
""
],
[
"Zhou",
"Yang",
""
],
[
"Gong",
"Minglun",
""
],
[
"Huang",
"Hui",
""
]
] |
Numerous valuable efforts have been devoted to achieving arbitrary style transfer since the seminal work of Gatys et al. However, existing state-of-the-art approaches often generate insufficiently stylized results under challenging cases. We believe a fundamental reason is that these approaches try to generate the stylized result in a single shot and hence fail to fully satisfy the constraints on semantic structures in the content images and style patterns in the style images. Inspired by the works on error-correction, instead, we propose a self-correcting model to predict what is wrong with the current stylization and refine it accordingly in an iterative manner. For each refinement, we transit the error features across both the spatial and scale domain and invert the processed features into a residual image, with a network we call Error Transition Network (ETNet). The proposed model improves over the state-of-the-art methods with better semantic structures and more adaptive style pattern details. Various qualitative and quantitative experiments show that the key concept of both progressive strategy and error-correction leads to better results. Code and models are available at https://github.com/zhijieW94/ETNet.
|
2307.00134
|
Giuseppe Alessio D'Inverno
|
Giuseppe Alessio D'Inverno and Simone Brugiapaglia and Mirco Ravanelli
|
Generalization Limits of Graph Neural Networks in Identity Effects
Learning
|
13 pages, 10 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven
learning on various graph domains. They are usually based on a message-passing
mechanism and have gained increasing popularity for their intuitive
formulation, which is closely linked to the Weisfeiler-Lehman (WL) test for
graph isomorphism to which they have been proven equivalent in terms of
expressive power. In this work, we establish new generalization properties and
fundamental limits of GNNs in the context of learning so-called identity
effects, i.e., the task of determining whether an object is composed of two
identical components or not. Our study is motivated by the need to understand
the capabilities of GNNs when performing simple cognitive tasks, with potential
applications in computational linguistics and chemistry. We analyze two case
studies: (i) two-letter words, for which we show that GNNs trained via
stochastic gradient descent are unable to generalize to unseen letters when
utilizing orthogonal encodings like one-hot representations; (ii) dicyclic
graphs, i.e., graphs composed of two cycles, for which we present positive
existence results leveraging the connection between GNNs and the WL test. Our
theoretical analysis is supported by an extensive numerical study.
|
[
{
"created": "Fri, 30 Jun 2023 20:56:38 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Oct 2023 17:57:48 GMT",
"version": "v2"
},
{
"created": "Tue, 31 Oct 2023 23:20:31 GMT",
"version": "v3"
}
] |
2023-11-02
|
[
[
"D'Inverno",
"Giuseppe Alessio",
""
],
[
"Brugiapaglia",
"Simone",
""
],
[
"Ravanelli",
"Mirco",
""
]
] |
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains. They are usually based on a message-passing mechanism and have gained increasing popularity for their intuitive formulation, which is closely linked to the Weisfeiler-Lehman (WL) test for graph isomorphism to which they have been proven equivalent in terms of expressive power. In this work, we establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects, i.e., the task of determining whether an object is composed of two identical components or not. Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks, with potential applications in computational linguistics and chemistry. We analyze two case studies: (i) two-letter words, for which we show that GNNs trained via stochastic gradient descent are unable to generalize to unseen letters when utilizing orthogonal encodings like one-hot representations; (ii) dicyclic graphs, i.e., graphs composed of two cycles, for which we present positive existence results leveraging the connection between GNNs and the WL test. Our theoretical analysis is supported by an extensive numerical study.
|
1810.12266
|
Adam Lopez
|
Ieva Vasiljeva, Sorcha Gilroy, Adam Lopez
|
The problem with probabilistic DAG automata for semantic graphs
|
To appear in NAACL-HLT 2019
| null | null | null |
cs.FL cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic representations in the form of directed acyclic graphs (DAGs) have
been introduced in recent years, and to model them, we need probabilistic
models of DAGs. One model that has attracted some attention is the DAG
automaton, but it has not been studied as a probabilistic model. We show that
some DAG automata cannot be made into useful probabilistic models by the nearly
universal strategy of assigning weights to transitions. The problem affects
single-rooted, multi-rooted, and unbounded-degree variants of DAG automata, and
appears to be pervasive. It does not affect planar variants, but these are
problematic for other reasons.
|
[
{
"created": "Mon, 29 Oct 2018 17:24:57 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Apr 2019 14:41:39 GMT",
"version": "v2"
}
] |
2019-04-09
|
[
[
"Vasiljeva",
"Ieva",
""
],
[
"Gilroy",
"Sorcha",
""
],
[
"Lopez",
"Adam",
""
]
] |
Semantic representations in the form of directed acyclic graphs (DAGs) have been introduced in recent years, and to model them, we need probabilistic models of DAGs. One model that has attracted some attention is the DAG automaton, but it has not been studied as a probabilistic model. We show that some DAG automata cannot be made into useful probabilistic models by the nearly universal strategy of assigning weights to transitions. The problem affects single-rooted, multi-rooted, and unbounded-degree variants of DAG automata, and appears to be pervasive. It does not affect planar variants, but these are problematic for other reasons.
|
2405.17713
|
Micah Carroll
|
Micah Carroll, Davis Foote, Anand Siththaranjan, Stuart Russell, Anca
Dragan
|
AI Alignment with Changing and Influenceable Reward Functions
|
Accepted to ICML 2024
| null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Existing AI alignment approaches assume that preferences are static, which is
unrealistic: our preferences change, and may even be influenced by our
interactions with AI systems themselves. To clarify the consequences of
incorrectly assuming static preferences, we introduce Dynamic Reward Markov
Decision Processes (DR-MDPs), which explicitly model preference changes and the
AI's influence on them. We show that despite its convenience, the
static-preference assumption may undermine the soundness of existing alignment
techniques, leading them to implicitly reward AI systems for influencing user
preferences in ways users may not truly want. We then explore potential
solutions. First, we offer a unifying perspective on how an agent's
optimization horizon may partially help reduce undesirable AI influence. Then,
we formalize different notions of AI alignment that account for preference
change from the outset. Comparing the strengths and limitations of 8 such
notions of alignment, we find that they all either err towards causing
undesirable AI influence, or are overly risk-averse, suggesting that a
straightforward solution to the problems of changing preferences may not exist.
As there is no avoiding grappling with changing preferences in real-world
settings, this makes it all the more important to handle these issues with
care, balancing risks and capabilities. We hope our work can provide conceptual
clarity and constitute a first step towards AI alignment practices which
explicitly account for (and contend with) the changing and influenceable nature
of human preferences.
|
[
{
"created": "Tue, 28 May 2024 00:08:46 GMT",
"version": "v1"
}
] |
2024-05-29
|
[
[
"Carroll",
"Micah",
""
],
[
"Foote",
"Davis",
""
],
[
"Siththaranjan",
"Anand",
""
],
[
"Russell",
"Stuart",
""
],
[
"Dragan",
"Anca",
""
]
] |
Existing AI alignment approaches assume that preferences are static, which is unrealistic: our preferences change, and may even be influenced by our interactions with AI systems themselves. To clarify the consequences of incorrectly assuming static preferences, we introduce Dynamic Reward Markov Decision Processes (DR-MDPs), which explicitly model preference changes and the AI's influence on them. We show that despite its convenience, the static-preference assumption may undermine the soundness of existing alignment techniques, leading them to implicitly reward AI systems for influencing user preferences in ways users may not truly want. We then explore potential solutions. First, we offer a unifying perspective on how an agent's optimization horizon may partially help reduce undesirable AI influence. Then, we formalize different notions of AI alignment that account for preference change from the outset. Comparing the strengths and limitations of 8 such notions of alignment, we find that they all either err towards causing undesirable AI influence, or are overly risk-averse, suggesting that a straightforward solution to the problems of changing preferences may not exist. As there is no avoiding grappling with changing preferences in real-world settings, this makes it all the more important to handle these issues with care, balancing risks and capabilities. We hope our work can provide conceptual clarity and constitute a first step towards AI alignment practices which explicitly account for (and contend with) the changing and influenceable nature of human preferences.
|
1901.10795
|
Mohammadreza Mousaei
|
Heather Jones, Siri Maley, Kenji Yonekawa, Mohammadreza Mousaei, J.
David Yesso, David Kohanbash, William Whittaker
|
Automated Analysis, Reporting, and Archiving for Robotic Nondestructive
Assay of Holdup Deposits
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To decommission deactivated gaseous diffusion enrichment facilities, miles of
contaminated pipe must be measured. The current method requires thousands of
manual measurements, repeated manual data transcription, and months of manual
analysis. The Pipe Crawling Activity Measurement System (PCAMS), developed by
Carnegie Mellon University and in commissioning for use at the DOE Portsmouth
Gaseous Diffusion Enrichment Facility, uses a robot to measure Uranium-235 from
inside pipes and automatically log the data. Radiation measurements, as well as
imagery, geometric modeling, and precise measurement positioning data are
digitally transferred to the PCAMS server. On the server, data can be
automatically processed in minutes and summarized for analyst review.
Measurement reports are auto-generated with the push of a button. A database
specially-configured to hold heterogeneous data such as spectra, images, and
robot trajectories serves as an archive. This paper outlines the features and
design of the PCAMS Post-Processing Software, currently in commissioning for
use at the Portsmouth Gaseous Diffusion Enrichment Facility. The analysis
process, the analyst interface to the system, and the content of auto-generated
reports are each described. Example pipe-interior geometric surface models,
illustration of how key report features apply in operational runs, and user
feedback are discussed.
|
[
{
"created": "Tue, 29 Jan 2019 15:46:24 GMT",
"version": "v1"
}
] |
2019-02-27
|
[
[
"Jones",
"Heather",
""
],
[
"Maley",
"Siri",
""
],
[
"Yonekawa",
"Kenji",
""
],
[
"Mousaei",
"Mohammadreza",
""
],
[
"Yesso",
"J. David",
""
],
[
"Kohanbash",
"David",
""
],
[
"Whittaker",
"William",
""
]
] |
To decommission deactivated gaseous diffusion enrichment facilities, miles of contaminated pipe must be measured. The current method requires thousands of manual measurements, repeated manual data transcription, and months of manual analysis. The Pipe Crawling Activity Measurement System (PCAMS), developed by Carnegie Mellon University and in commissioning for use at the DOE Portsmouth Gaseous Diffusion Enrichment Facility, uses a robot to measure Uranium-235 from inside pipes and automatically log the data. Radiation measurements, as well as imagery, geometric modeling, and precise measurement positioning data are digitally transferred to the PCAMS server. On the server, data can be automatically processed in minutes and summarized for analyst review. Measurement reports are auto-generated with the push of a button. A database specially-configured to hold heterogeneous data such as spectra, images, and robot trajectories serves as an archive. This paper outlines the features and design of the PCAMS Post-Processing Software, currently in commissioning for use at the Portsmouth Gaseous Diffusion Enrichment Facility. The analysis process, the analyst interface to the system, and the content of auto-generated reports are each described. Example pipe-interior geometric surface models, illustration of how key report features apply in operational runs, and user feedback are discussed.
|
2312.09249
|
Xinyue Wei
|
Ruoxi Shi, Xinyue Wei, Cheng Wang, Hao Su
|
ZeroRF: Fast Sparse View 360{\deg} Reconstruction with Zero Pretraining
|
Project page: https://sarahweiii.github.io/zerorf/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ZeroRF, a novel per-scene optimization method addressing the
challenge of sparse view 360{\deg} reconstruction in neural field
representations. Current breakthroughs like Neural Radiance Fields (NeRF) have
demonstrated high-fidelity image synthesis but struggle with sparse input
views. Existing methods, such as Generalizable NeRFs and per-scene optimization
approaches, face limitations in data dependency, computational cost, and
generalization across diverse scenarios. To overcome these challenges, we
propose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into
a factorized NeRF representation. Unlike traditional methods, ZeroRF
parametrizes feature grids with a neural network generator, enabling efficient
sparse view 360{\deg} reconstruction without any pretraining or additional
regularization. Extensive experiments showcase ZeroRF's versatility and
superiority in terms of both quality and speed, achieving state-of-the-art
results on benchmark datasets. ZeroRF's significance extends to applications in
3D content generation and editing. Project page:
https://sarahweiii.github.io/zerorf/
|
[
{
"created": "Thu, 14 Dec 2023 18:59:32 GMT",
"version": "v1"
}
] |
2023-12-15
|
[
[
"Shi",
"Ruoxi",
""
],
[
"Wei",
"Xinyue",
""
],
[
"Wang",
"Cheng",
""
],
[
"Su",
"Hao",
""
]
] |
We present ZeroRF, a novel per-scene optimization method addressing the challenge of sparse view 360{\deg} reconstruction in neural field representations. Current breakthroughs like Neural Radiance Fields (NeRF) have demonstrated high-fidelity image synthesis but struggle with sparse input views. Existing methods, such as Generalizable NeRFs and per-scene optimization approaches, face limitations in data dependency, computational cost, and generalization across diverse scenarios. To overcome these challenges, we propose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into a factorized NeRF representation. Unlike traditional methods, ZeroRF parametrizes feature grids with a neural network generator, enabling efficient sparse view 360{\deg} reconstruction without any pretraining or additional regularization. Extensive experiments showcase ZeroRF's versatility and superiority in terms of both quality and speed, achieving state-of-the-art results on benchmark datasets. ZeroRF's significance extends to applications in 3D content generation and editing. Project page: https://sarahweiii.github.io/zerorf/
|
0710.4685
|
EDA Publishing Association
|
C. Bolchini, F. Salice, D. Sciuto, L. Pomante
|
Reliable System Specification for Self-Checking Data-Paths
|
Submitted on behalf of EDAA (http://www.edaa.com/)
|
In Design, Automation and Test in Europe - DATE'05, Munich,
Germany (2005)
| null | null |
cs.AR
| null |
The design of reliable circuits has received a lot of attention in the past,
leading to the definition of several design techniques introducing fault
detection and fault tolerance properties in systems for critical
applications/environments. Such design methodologies tackled the problem at
different abstraction levels, from switch-level to logic, RT level, and more
recently to system level. Aim of this paper is to introduce a novel
system-level technique based on the redefinition of the operators functionality
in the system specification. This technique provides reliability properties to
the system data path, transparently with respect to the designer. Feasibility,
fault coverage, performance degradation and overheads are investigated on a FIR
circuit.
|
[
{
"created": "Thu, 25 Oct 2007 09:08:39 GMT",
"version": "v1"
}
] |
2011-11-09
|
[
[
"Bolchini",
"C.",
""
],
[
"Salice",
"F.",
""
],
[
"Sciuto",
"D.",
""
],
[
"Pomante",
"L.",
""
]
] |
The design of reliable circuits has received a lot of attention in the past, leading to the definition of several design techniques introducing fault detection and fault tolerance properties in systems for critical applications/environments. Such design methodologies tackled the problem at different abstraction levels, from switch-level to logic, RT level, and more recently to system level. The aim of this paper is to introduce a novel system-level technique based on the redefinition of the operators' functionality in the system specification. This technique provides reliability properties to the system data path, transparently with respect to the designer. Feasibility, fault coverage, performance degradation and overheads are investigated on a FIR circuit.
|
2008.10192
|
Radoslav Fulek
|
Alon Efrat, Radoslav Fulek, Stephen Kobourov, Csaba D. T\'oth
|
Polygons with Prescribed Angles in 2D and 3D
|
15 pages, 9 figures, a new section about self-intersecting
realizations in 3D
| null | null | null |
cs.CG cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the construction of a polygon $P$ with $n$ vertices whose turning
angles at the vertices are given by a sequence $A=(\alpha_0,\ldots,
\alpha_{n-1})$, $\alpha_i\in (-\pi,\pi)$, for $i\in\{0,\ldots, n-1\}$. The
problem of realizing $A$ by a polygon can be seen as that of constructing a
straight-line drawing of a graph with prescribed angles at vertices, and hence,
it is a special case of the well studied problem of constructing an \emph{angle
graph}.
In 2D, we characterize sequences $A$ for which every generic polygon
$P\subset \mathbb{R}^2$ realizing $A$ has at least $c$ crossings, for every
$c\in \mathbb{N}$, and describe an efficient algorithm that constructs, for a
given sequence $A$, a generic polygon $P\subset \mathbb{R}^2$ that realizes $A$
with the minimum number of crossings.
In 3D, we describe an efficient algorithm that tests whether a given sequence
$A$ can be realized by a (not necessarily generic) polygon $P\subset
\mathbb{R}^3$, and for every realizable sequence the algorithm finds a
realization.
|
[
{
"created": "Mon, 24 Aug 2020 05:19:06 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Nov 2020 16:51:06 GMT",
"version": "v2"
}
] |
2020-11-03
|
[
[
"Efrat",
"Alon",
""
],
[
"Fulek",
"Radoslav",
""
],
[
"Kobourov",
"Stephen",
""
],
[
"Tóth",
"Csaba D.",
""
]
] |
We consider the construction of a polygon $P$ with $n$ vertices whose turning angles at the vertices are given by a sequence $A=(\alpha_0,\ldots, \alpha_{n-1})$, $\alpha_i\in (-\pi,\pi)$, for $i\in\{0,\ldots, n-1\}$. The problem of realizing $A$ by a polygon can be seen as that of constructing a straight-line drawing of a graph with prescribed angles at vertices, and hence, it is a special case of the well studied problem of constructing an \emph{angle graph}. In 2D, we characterize sequences $A$ for which every generic polygon $P\subset \mathbb{R}^2$ realizing $A$ has at least $c$ crossings, for every $c\in \mathbb{N}$, and describe an efficient algorithm that constructs, for a given sequence $A$, a generic polygon $P\subset \mathbb{R}^2$ that realizes $A$ with the minimum number of crossings. In 3D, we describe an efficient algorithm that tests whether a given sequence $A$ can be realized by a (not necessarily generic) polygon $P\subset \mathbb{R}^3$, and for every realizable sequence the algorithm finds a realization.
|
2302.11843
|
K. J. Kevin Feng
|
K. J. Kevin Feng and David W. McDonald
|
Addressing UX Practitioners' Challenges in Designing ML Applications: an
Interactive Machine Learning Approach
| null | null |
10.1145/3581641.3584064
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
UX practitioners face novel challenges when designing user interfaces for
machine learning (ML)-enabled applications. Interactive ML paradigms, like
AutoML and interactive machine teaching, lower the barrier for non-expert end
users to create, understand, and use ML models, but their application to UX
practice is largely unstudied. We conducted a task-based design study with 27
UX practitioners where we asked them to propose a proof-of-concept design for a
new ML-enabled application. During the task, our participants were given
opportunities to create, test, and modify ML models as part of their workflows.
Through a qualitative analysis of our post-task interview, we found that
direct, interactive experimentation with ML allowed UX practitioners to tie ML
capabilities and underlying data to user goals, compose affordances to enhance
end-user interactions with ML, and identify ML-related ethical risks and
challenges. We discuss our findings in the context of previously established
human-AI guidelines. We also identify some limitations of interactive ML in UX
processes and propose research-informed machine teaching as a supplement to
future design tools alongside interactive ML.
|
[
{
"created": "Thu, 23 Feb 2023 08:18:41 GMT",
"version": "v1"
}
] |
2023-02-24
|
[
[
"Feng",
"K. J. Kevin",
""
],
[
"McDonald",
"David W.",
""
]
] |
UX practitioners face novel challenges when designing user interfaces for machine learning (ML)-enabled applications. Interactive ML paradigms, like AutoML and interactive machine teaching, lower the barrier for non-expert end users to create, understand, and use ML models, but their application to UX practice is largely unstudied. We conducted a task-based design study with 27 UX practitioners where we asked them to propose a proof-of-concept design for a new ML-enabled application. During the task, our participants were given opportunities to create, test, and modify ML models as part of their workflows. Through a qualitative analysis of our post-task interview, we found that direct, interactive experimentation with ML allowed UX practitioners to tie ML capabilities and underlying data to user goals, compose affordances to enhance end-user interactions with ML, and identify ML-related ethical risks and challenges. We discuss our findings in the context of previously established human-AI guidelines. We also identify some limitations of interactive ML in UX processes and propose research-informed machine teaching as a supplement to future design tools alongside interactive ML.
|
2106.05568
|
Julie Gerlings
|
Julie Gerlings, Millie S{\o}ndergaard Jensen and Arisa Shollo
|
Explainable AI, but explainable to whom?
|
Book chapter for AI in Healthcare
| null |
10.1007/978-3-030-83620-7_7
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in AI technologies have resulted in superior levels of AI-based
model performance. However, this has also led to a greater degree of model
complexity, resulting in 'black box' models. In response to the AI black box
problem, the field of explainable AI (xAI) has emerged with the aim of
providing explanations catered to human understanding, trust, and transparency.
Yet, we still have a limited understanding of how xAI addresses the need for
explainable AI in the context of healthcare. Our research explores the
differing explanation needs amongst stakeholders during the development of an
AI-system for classifying COVID-19 patients for the ICU. We demonstrate that
there is a constellation of stakeholders who have different explanation needs,
not just the 'user'. Further, the findings demonstrate how the need for xAI
emerges through concerns associated with specific stakeholder groups i.e., the
development team, subject matter experts, decision makers, and the audience.
Our findings contribute to the expansion of xAI by highlighting that different
stakeholders have different explanation needs. From a practical perspective,
the study provides insights on how AI systems can be adjusted to support
different stakeholders needs, ensuring better implementation and operation in a
healthcare context.
|
[
{
"created": "Thu, 10 Jun 2021 07:47:33 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Oct 2022 11:20:06 GMT",
"version": "v2"
}
] |
2022-10-25
|
[
[
"Gerlings",
"Julie",
""
],
[
"Jensen",
"Millie Søndergaard",
""
],
[
"Shollo",
"Arisa",
""
]
] |
Advances in AI technologies have resulted in superior levels of AI-based model performance. However, this has also led to a greater degree of model complexity, resulting in 'black box' models. In response to the AI black box problem, the field of explainable AI (xAI) has emerged with the aim of providing explanations catered to human understanding, trust, and transparency. Yet, we still have a limited understanding of how xAI addresses the need for explainable AI in the context of healthcare. Our research explores the differing explanation needs amongst stakeholders during the development of an AI-system for classifying COVID-19 patients for the ICU. We demonstrate that there is a constellation of stakeholders who have different explanation needs, not just the 'user'. Further, the findings demonstrate how the need for xAI emerges through concerns associated with specific stakeholder groups i.e., the development team, subject matter experts, decision makers, and the audience. Our findings contribute to the expansion of xAI by highlighting that different stakeholders have different explanation needs. From a practical perspective, the study provides insights on how AI systems can be adjusted to support different stakeholders' needs, ensuring better implementation and operation in a healthcare context.
|
2308.02568
|
Juan Manuel Rodriguez
|
Juan Manuel Rodriguez and Antonela Tommasel
|
Weighted Multi-Level Feature Factorization for App ads CTR and
installation prediction
| null | null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper provides an overview of the approach we used as team ISISTANITOS
for the ACM RecSys Challenge 2023. The competition was organized by ShareChat,
and involved predicting the probability of a user clicking an app ad and/or
installing an app, to improve deep funnel optimization, with a special focus on
user privacy. Our proposed method infers the probabilities of clicking and
installing as two different but related tasks. Hence, the model engineers a
specific set of features for each task and a set of shared features. Our model
is called Weighted Multi-Level Feature Factorization because it considers the
interaction of different order features, where the order is associated with the
depth in a neural network. The prediction for a given task is generated by
combining the task-specific and shared features on the different levels. Our
submission achieved rank 11 and an overall score of 55 in the competition
academia-track final results. We release our source code at:
https://github.com/knife982000/RecSys2023Challenge
|
[
{
"created": "Thu, 3 Aug 2023 08:56:24 GMT",
"version": "v1"
}
] |
2023-08-08
|
[
[
"Rodriguez",
"Juan Manuel",
""
],
[
"Tommasel",
"Antonela",
""
]
] |
This paper provides an overview of the approach we used as team ISISTANITOS for the ACM RecSys Challenge 2023. The competition was organized by ShareChat, and involved predicting the probability of a user clicking an app ad and/or installing an app, to improve deep funnel optimization, with a special focus on user privacy. Our proposed method infers the probabilities of clicking and installing as two different but related tasks. Hence, the model engineers a specific set of features for each task and a set of shared features. Our model is called Weighted Multi-Level Feature Factorization because it considers the interaction of different order features, where the order is associated with the depth in a neural network. The prediction for a given task is generated by combining the task-specific and shared features on the different levels. Our submission achieved rank 11 and an overall score of 55 in the competition academia-track final results. We release our source code at: https://github.com/knife982000/RecSys2023Challenge
|
2209.13464
|
Zhijian Ou
|
Hong Liu, Hao Peng, Zhijian Ou, Juanzi Li, Yi Huang and Junlan Feng
|
Information Extraction and Human-Robot Dialogue towards Real-life Tasks:
A Baseline Study with the MobileCS Dataset
|
Accepted by EMNLP 2022 SereTOD Workshop
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, there has emerged a class of task-oriented dialogue (TOD) datasets
collected through Wizard-of-Oz simulated games. However, the Wizard-of-Oz data
are in fact simulated data and thus are fundamentally different from real-life
conversations, which are noisier and more casual. Recently, the SereTOD
challenge was organized and released the MobileCS dataset, which consists of
real-world dialog transcripts between real users and customer-service staff
from China
Mobile. Based on the MobileCS dataset, the SereTOD challenge has two tasks, not
only evaluating the construction of the dialogue system itself, but also
examining information extraction from dialog transcripts, which is crucial for
building the knowledge base for TOD. This paper mainly presents a baseline
study of the two tasks with the MobileCS dataset. We introduce how the two
baselines are constructed, the problems encountered, and the results. We
anticipate that the baselines can facilitate exciting future research to build
human-robot dialogue systems for real-life tasks.
|
[
{
"created": "Tue, 27 Sep 2022 15:30:43 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Oct 2022 06:15:28 GMT",
"version": "v2"
}
] |
2022-10-19
|
[
[
"Liu",
"Hong",
""
],
[
"Peng",
"Hao",
""
],
[
"Ou",
"Zhijian",
""
],
[
"Li",
"Juanzi",
""
],
[
"Huang",
"Yi",
""
],
[
"Feng",
"Junlan",
""
]
] |
Recently, there has emerged a class of task-oriented dialogue (TOD) datasets collected through Wizard-of-Oz simulated games. However, the Wizard-of-Oz data are in fact simulated data and thus are fundamentally different from real-life conversations, which are noisier and more casual. Recently, the SereTOD challenge was organized and released the MobileCS dataset, which consists of real-world dialog transcripts between real users and customer-service staff from China Mobile. Based on the MobileCS dataset, the SereTOD challenge has two tasks, not only evaluating the construction of the dialogue system itself, but also examining information extraction from dialog transcripts, which is crucial for building the knowledge base for TOD. This paper mainly presents a baseline study of the two tasks with the MobileCS dataset. We introduce how the two baselines are constructed, the problems encountered, and the results. We anticipate that the baselines can facilitate exciting future research to build human-robot dialogue systems for real-life tasks.
|
1705.06691
|
Suleiman Yerima
|
Mohammed K. Alzaylaee, Suleiman Y. Yerima, Sakir Sezer
|
Improving Dynamic Analysis of Android Apps Using Hybrid Test Input
Generation
|
International Conference On Cyber Security And Protection Of Digital
Services (Cyber Security 2017)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Android OS has become the most popular mobile operating system leading to
a significant increase in the spread of Android malware. Consequently, several
static and dynamic analysis systems have been developed to detect Android
malware. With dynamic analysis, efficient test input generation is needed in
order to trigger the potential run-time malicious behaviours. Most existing
dynamic analysis systems employ random-based input generation methods usually
built using the Android Monkey tool. Random-based input generation has several
shortcomings including limited code coverage, which motivates us to explore
combining it with a state-based method in order to improve efficiency. Hence,
in this paper, we present a novel hybrid test input generation approach
designed to improve dynamic analysis on real devices. We implemented the hybrid
system by integrating a random-based tool (Monkey) with a state-based tool
(DroidBot) in order to improve code coverage and potentially uncover more
malicious behaviours. The system is evaluated using 2,444 Android apps
containing 1222 benign and 1222 malware samples from the Android malware genome
project. Three scenarios, random only, state-based only, and our proposed
hybrid approach were investigated to comparatively evaluate their performances.
Our study shows that the hybrid approach significantly improved the amount of
dynamic features extracted from both benign and malware samples over the
state-based and commonly used random test input generation method.
|
[
{
"created": "Thu, 18 May 2017 16:48:20 GMT",
"version": "v1"
}
] |
2017-05-19
|
[
[
"Alzaylaee",
"Mohammed K.",
""
],
[
"Yerima",
"Suleiman Y.",
""
],
[
"Sezer",
"Sakir",
""
]
] |
The Android OS has become the most popular mobile operating system, leading to a significant increase in the spread of Android malware. Consequently, several static and dynamic analysis systems have been developed to detect Android malware. With dynamic analysis, efficient test input generation is needed in order to trigger the potential run-time malicious behaviours. Most existing dynamic analysis systems employ random-based input generation methods, usually built using the Android Monkey tool. Random-based input generation has several shortcomings, including limited code coverage, which motivates us to explore combining it with a state-based method in order to improve efficiency. Hence, in this paper, we present a novel hybrid test input generation approach designed to improve dynamic analysis on real devices. We implemented the hybrid system by integrating a random-based tool (Monkey) with a state-based tool (DroidBot) in order to improve code coverage and potentially uncover more malicious behaviours. The system is evaluated using 2,444 Android apps comprising 1,222 benign and 1,222 malware samples from the Android Malware Genome Project. Three scenarios, random only, state-based only, and our proposed hybrid approach, were investigated to comparatively evaluate their performances. Our study shows that the hybrid approach significantly improved the amount of dynamic features extracted from both benign and malware samples over the state-based and commonly used random test input generation methods.
|