| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
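The `authors_parsed` column in the records below stores each author as a `[last, first, suffix]` triple. As a minimal sketch of how that field could be reassembled into a display string, assuming the layout shown in the records (the function name and sample input are illustrative, not part of the dataset):

```python
import json

def format_authors(authors_parsed_json: str) -> str:
    """Turn a JSON list of [last, first, suffix] triples into 'First Last' names."""
    triples = json.loads(authors_parsed_json)
    names = []
    for last, first, suffix in triples:
        name = f"{first} {last}".strip()
        if suffix:  # suffix is usually empty in these records
            name += f" {suffix}"
        names.append(name)
    return ", ".join(names)

# Example input mirroring the first record below
record_authors = '[["Nguyen", "Tan", ""], ["Ye", "Nan", ""], ["Bartlett", "Peter L.", ""]]'
print(format_authors(record_authors))  # Tan Nguyen, Nan Ye, Peter L. Bartlett
```

This reproduces the comma-separated form used in the `authors` column.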
1910.03742
|
Tan Nguyen
|
Tan Nguyen, Nan Ye, Peter L. Bartlett
|
Greedy Convex Ensemble
|
Replace the previous version with the camera ready version accepted
for IJCAI 2020
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider learning a convex combination of basis models, and present some
new theoretical and empirical results that demonstrate the effectiveness of a
greedy approach. Theoretically, we first consider whether we can use linear,
instead of convex, combinations, and obtain generalization results similar to
existing ones for learning from a convex hull. We obtain a negative result that
even the linear hull of very simple basis functions can have unbounded
capacity, and is thus prone to overfitting; on the other hand, convex hulls are
still rich but have bounded capacities. Secondly, we obtain a generalization
bound for a general class of Lipschitz loss functions. Empirically, we first
discuss how a convex combination can be greedily learned with early stopping,
and how a convex combination can be non-greedily learned when the number of
basis models is known a priori. Our experiments suggest that the greedy scheme
is competitive with or better than several baselines, including boosting and
random forests. The greedy algorithm requires little effort in hyper-parameter
tuning, and also seems able to adapt to the underlying complexity of the
problem. Our code is available at https://github.com/tan1889/gce.
|
[
{
"created": "Wed, 9 Oct 2019 01:41:56 GMT",
"version": "v1"
},
{
"created": "Sun, 3 May 2020 04:18:52 GMT",
"version": "v2"
}
] |
2020-05-05
|
[
[
"Nguyen",
"Tan",
""
],
[
"Ye",
"Nan",
""
],
[
"Bartlett",
"Peter L.",
""
]
] |
We consider learning a convex combination of basis models, and present some new theoretical and empirical results that demonstrate the effectiveness of a greedy approach. Theoretically, we first consider whether we can use linear, instead of convex, combinations, and obtain generalization results similar to existing ones for learning from a convex hull. We obtain a negative result that even the linear hull of very simple basis functions can have unbounded capacity, and is thus prone to overfitting; on the other hand, convex hulls are still rich but have bounded capacities. Secondly, we obtain a generalization bound for a general class of Lipschitz loss functions. Empirically, we first discuss how a convex combination can be greedily learned with early stopping, and how a convex combination can be non-greedily learned when the number of basis models is known a priori. Our experiments suggest that the greedy scheme is competitive with or better than several baselines, including boosting and random forests. The greedy algorithm requires little effort in hyper-parameter tuning, and also seems able to adapt to the underlying complexity of the problem. Our code is available at https://github.com/tan1889/gce.
|
2104.14202
|
Javier Rodriguez-Puigvert
|
Javier Rodr\'iguez-Puigvert, Rub\'en Mart\'inez-Cant\'in, Javier
Civera
|
Bayesian Deep Neural Networks for Supervised Learning of Single-View
Depth
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Uncertainty quantification is essential for robotic perception, as
overconfident or point estimators can lead to collisions and damage to the
environment and the robot. In this paper, we evaluate scalable approaches to
uncertainty quantification in single-view supervised depth learning,
specifically MC dropout and deep ensembles. For MC dropout, in particular, we
explore the effect of dropout at different levels in the architecture. We
show that adding dropout in all layers of the encoder brings better results
than other variations found in the literature. This configuration performs
similarly to deep ensembles with a much lower memory footprint, which is
relevant for applications. Finally, we explore the use of depth uncertainty for
pseudo-RGBD ICP and demonstrate its potential to estimate accurate two-view
relative motion with the real scale.
|
[
{
"created": "Thu, 29 Apr 2021 08:45:24 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Sep 2021 09:13:29 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Dec 2021 13:02:45 GMT",
"version": "v3"
}
] |
2021-12-16
|
[
[
"Rodríguez-Puigvert",
"Javier",
""
],
[
"Martínez-Cantín",
"Rubén",
""
],
[
"Civera",
"Javier",
""
]
] |
Uncertainty quantification is essential for robotic perception, as overconfident or point estimators can lead to collisions and damage to the environment and the robot. In this paper, we evaluate scalable approaches to uncertainty quantification in single-view supervised depth learning, specifically MC dropout and deep ensembles. For MC dropout, in particular, we explore the effect of dropout at different levels in the architecture. We show that adding dropout in all layers of the encoder brings better results than other variations found in the literature. This configuration performs similarly to deep ensembles with a much lower memory footprint, which is relevant for applications. Finally, we explore the use of depth uncertainty for pseudo-RGBD ICP and demonstrate its potential to estimate accurate two-view relative motion with the real scale.
|
1712.07003
|
Ronan Fablet
|
Ronan Fablet, Said Ouala, Cedric Herzet
|
Bilinear residual Neural Network for the identification and forecasting
of dynamical systems
|
Submitted
| null | null | null |
cs.LG eess.SP physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the increasing availability of large-scale observation and simulation
datasets, data-driven representations arise as efficient and relevant
computational representations of dynamical systems for a wide range of
applications, where model-driven models based on ordinary differential
equations remain the state-of-the-art approaches. In this work, we investigate neural
networks (NN) as physically-sound data-driven representations of such systems.
Reinterpreting Runge-Kutta methods as graphical models, we consider a residual
NN architecture and introduce bilinear layers to embed non-linearities which
are intrinsic features of dynamical systems. From numerical experiments for
classic dynamical systems, we demonstrate the relevance of the proposed
NN-based architecture both in terms of forecasting performance and model
identification.
|
[
{
"created": "Tue, 19 Dec 2017 15:42:40 GMT",
"version": "v1"
}
] |
2017-12-20
|
[
[
"Fablet",
"Ronan",
""
],
[
"Ouala",
"Said",
""
],
[
"Herzet",
"Cedric",
""
]
] |
Due to the increasing availability of large-scale observation and simulation datasets, data-driven representations arise as efficient and relevant computational representations of dynamical systems for a wide range of applications, where model-driven models based on ordinary differential equations remain the state-of-the-art approaches. In this work, we investigate neural networks (NN) as physically-sound data-driven representations of such systems. Reinterpreting Runge-Kutta methods as graphical models, we consider a residual NN architecture and introduce bilinear layers to embed non-linearities which are intrinsic features of dynamical systems. From numerical experiments for classic dynamical systems, we demonstrate the relevance of the proposed NN-based architecture both in terms of forecasting performance and model identification.
|
2406.07716
|
Sm Shaqib
|
Md. Mahmudul Hasan, SM Shaqib, Ms. Sharmin Akter, Rabiul Alam, Afraz
Ul Haque, Shahrun akter khushbu
|
Unleashing the Power of Transfer Learning Model for Sophisticated Insect
Detection: Revolutionizing Insect Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of the Insect Detection System for Crop and Plant Health is to
keep an eye out for and identify insect infestations in farming areas. By
utilizing cutting-edge technology like computer vision and machine learning,
the system seeks to identify hazardous insects early and accurately. This would
enable prompt response to save crops and maintain optimal plant health. The
method of this study includes data acquisition, preprocessing, data splitting,
model implementation, and model evaluation. Different models, including MobileNetV2,
ResNet152V2, Xception, and a custom CNN, were used in this study. In order to
categorize insect photos, a Convolutional Neural Network (CNN) based on the
ResNet152V2 architecture is constructed and evaluated in this work. Achieving
99% training accuracy and 97% testing accuracy, ResNet152V2 demonstrates
superior performance among four implemented models. The results highlight its
potential for real-world applications in insect classification and entomology
studies, emphasizing efficiency and accuracy. To ensure food security and
sustain agricultural output globally, finding insects is crucial. Cutting-edge
technology, such as ResNet152V2 models, greatly influence automating and
improving the accuracy of insect identification. Efficient insect detection not
only minimizes crop losses but also enhances agricultural productivity,
contributing to sustainable food production. This underscores the pivotal role
of technology in addressing challenges related to global food security.
|
[
{
"created": "Tue, 11 Jun 2024 20:52:42 GMT",
"version": "v1"
}
] |
2024-06-13
|
[
[
"Hasan",
"Md. Mahmudul",
""
],
[
"Shaqib",
"SM",
""
],
[
"Akter",
"Ms. Sharmin",
""
],
[
"Alam",
"Rabiul",
""
],
[
"Haque",
"Afraz Ul",
""
],
[
"khushbu",
"Shahrun akter",
""
]
] |
The purpose of the Insect Detection System for Crop and Plant Health is to keep an eye out for and identify insect infestations in farming areas. By utilizing cutting-edge technology like computer vision and machine learning, the system seeks to identify hazardous insects early and accurately. This would enable prompt response to save crops and maintain optimal plant health. The method of this study includes data acquisition, preprocessing, data splitting, model implementation, and model evaluation. Different models, including MobileNetV2, ResNet152V2, Xception, and a custom CNN, were used in this study. In order to categorize insect photos, a Convolutional Neural Network (CNN) based on the ResNet152V2 architecture is constructed and evaluated in this work. Achieving 99% training accuracy and 97% testing accuracy, ResNet152V2 demonstrates superior performance among four implemented models. The results highlight its potential for real-world applications in insect classification and entomology studies, emphasizing efficiency and accuracy. To ensure food security and sustain agricultural output globally, finding insects is crucial. Cutting-edge technology, such as ResNet152V2 models, greatly influence automating and improving the accuracy of insect identification. Efficient insect detection not only minimizes crop losses but also enhances agricultural productivity, contributing to sustainable food production. This underscores the pivotal role of technology in addressing challenges related to global food security.
|
2401.10150
|
Changgu Chen
|
Changgu Chen, Junwei Shu, Lianggangxu Chen, Gaoqi He, Changbo Wang and
Yang Li
|
Motion-Zero: Zero-Shot Moving Object Control Framework for
Diffusion-Based Video Generation
|
Preprint
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent large-scale pre-trained diffusion models have demonstrated a powerful
generative ability to produce high-quality videos from detailed text
descriptions. However, exerting control over the motion of objects in videos
generated by any video diffusion model is a challenging problem. In this paper,
we propose a novel zero-shot moving object trajectory control framework,
Motion-Zero, to enable a bounding-box-trajectories-controlled text-to-video
diffusion model. To this end, an initial noise prior module is designed to
provide a position-based prior to improve the stability of the appearance of
the moving object and the accuracy of position. In addition, based on the
attention map of the U-net, spatial constraints are directly applied to the
denoising process of diffusion models, which further ensures the positional and
spatial consistency of moving objects during the inference. Furthermore,
temporal consistency is guaranteed with a proposed shift temporal attention
mechanism. Our method can be flexibly applied to various state-of-the-art video
diffusion models without any training process. Extensive experiments
demonstrate our proposed method can control the motion trajectories of objects
and generate high-quality videos.
|
[
{
"created": "Thu, 18 Jan 2024 17:22:37 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jan 2024 04:27:05 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Jan 2024 02:40:52 GMT",
"version": "v3"
}
] |
2024-01-23
|
[
[
"Chen",
"Changgu",
""
],
[
"Shu",
"Junwei",
""
],
[
"Chen",
"Lianggangxu",
""
],
[
"He",
"Gaoqi",
""
],
[
"Wang",
"Changbo",
""
],
[
"Li",
"Yang",
""
]
] |
Recent large-scale pre-trained diffusion models have demonstrated a powerful generative ability to produce high-quality videos from detailed text descriptions. However, exerting control over the motion of objects in videos generated by any video diffusion model is a challenging problem. In this paper, we propose a novel zero-shot moving object trajectory control framework, Motion-Zero, to enable a bounding-box-trajectories-controlled text-to-video diffusion model. To this end, an initial noise prior module is designed to provide a position-based prior to improve the stability of the appearance of the moving object and the accuracy of position. In addition, based on the attention map of the U-net, spatial constraints are directly applied to the denoising process of diffusion models, which further ensures the positional and spatial consistency of moving objects during the inference. Furthermore, temporal consistency is guaranteed with a proposed shift temporal attention mechanism. Our method can be flexibly applied to various state-of-the-art video diffusion models without any training process. Extensive experiments demonstrate our proposed method can control the motion trajectories of objects and generate high-quality videos.
|
1305.1459
|
Pier Stanislao Paolucci
|
Pier Stanislao Paolucci, Iuliana Bacivarov, Gert Goossens, Rainer
Leupers, Fr\'ed\'eric Rousseau, Christoph Schumacher, Lothar Thiele, Piero
Vicini
|
EURETILE 2010-2012 summary: first three years of activity of the
European Reference Tiled Experiment
|
56 pages
| null |
10.12837/2013T01
| null |
cs.DC cs.AR cs.NE cs.OS cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This is the summary of the first three years of activity of the EURETILE FP7
project 247846. EURETILE investigates and implements brain-inspired and
fault-tolerant foundational innovations to the system architecture of massively
parallel tiled computer architectures and the corresponding programming
paradigm. The execution targets are a many-tile HW platform, and a many-tile
simulator. A set of SW process - HW tile mapping candidates is generated by the
holistic SW tool-chain using a combination of analytic and bio-inspired
methods. The Hardware dependent Software is then generated, providing OS
services with maximum efficiency/minimal overhead. The many-tile simulator
collects profiling data, closing the loop of the SW tool chain. Fine-grain
parallelism inside processes is exploited by optimized intra-tile compilation
techniques, but the project focus is above the level of the elementary tile.
The elementary HW tile is a multi-processor, which includes a fault tolerant
Distributed Network Processor (for inter-tile communication) and ASIP
accelerators. Furthermore, EURETILE investigates and implements the innovations
for equipping the elementary HW tile with high-bandwidth, low-latency
brain-like inter-tile communication emulating 3 levels of connection hierarchy,
namely neural columns, cortical areas and cortex, and develops a dedicated
cortical simulation benchmark: DPSNN-STDP (Distributed Polychronous Spiking
Neural Net with synaptic Spiking Time Dependent Plasticity). EURETILE leverages
on the multi-tile HW paradigm and SW tool-chain developed by the FET-ACA SHAPES
Integrated Project (2006-2009).
|
[
{
"created": "Tue, 7 May 2013 10:22:31 GMT",
"version": "v1"
}
] |
2013-06-24
|
[
[
"Paolucci",
"Pier Stanislao",
""
],
[
"Bacivarov",
"Iuliana",
""
],
[
"Goossens",
"Gert",
""
],
[
"Leupers",
"Rainer",
""
],
[
"Rousseau",
"Frédéric",
""
],
[
"Schumacher",
"Christoph",
""
],
[
"Thiele",
"Lothar",
""
],
[
"Vicini",
"Piero",
""
]
] |
This is the summary of the first three years of activity of the EURETILE FP7 project 247846. EURETILE investigates and implements brain-inspired and fault-tolerant foundational innovations to the system architecture of massively parallel tiled computer architectures and the corresponding programming paradigm. The execution targets are a many-tile HW platform, and a many-tile simulator. A set of SW process - HW tile mapping candidates is generated by the holistic SW tool-chain using a combination of analytic and bio-inspired methods. The Hardware dependent Software is then generated, providing OS services with maximum efficiency/minimal overhead. The many-tile simulator collects profiling data, closing the loop of the SW tool chain. Fine-grain parallelism inside processes is exploited by optimized intra-tile compilation techniques, but the project focus is above the level of the elementary tile. The elementary HW tile is a multi-processor, which includes a fault tolerant Distributed Network Processor (for inter-tile communication) and ASIP accelerators. Furthermore, EURETILE investigates and implements the innovations for equipping the elementary HW tile with high-bandwidth, low-latency brain-like inter-tile communication emulating 3 levels of connection hierarchy, namely neural columns, cortical areas and cortex, and develops a dedicated cortical simulation benchmark: DPSNN-STDP (Distributed Polychronous Spiking Neural Net with synaptic Spiking Time Dependent Plasticity). EURETILE leverages on the multi-tile HW paradigm and SW tool-chain developed by the FET-ACA SHAPES Integrated Project (2006-2009).
|
2208.03541
|
Pino Caballero-Gil
|
V Mora-Afonso, Pino Caballero-Gil, Jezabel Molina-Gil
|
Strong authentication on smart wireless devices
| null |
Second International Conference on Future Generation Communication
Technologies (FGCT 2013), pp. 137-142,
|
10.1109/FGCT.2013.6767206
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The rapid deployment of wireless technologies has given rise to the current
situation where mobile phones and other wireless devices have become essential
elements in all types of activities, including in the home. In particular,
smartphones and laptops are used for wirelessly sharing photos and documents,
playing games, browsing websites, and viewing multimedia, for example. This
work describes a proposal for both desktop and mobile applications that use
Identity-Based Cryptography (IBC) to protect communications between smart
wireless devices in the home. It combines the use of IBC for Wi-Fi and
Bluetooth communication, with the promising Near Field Communication (NFC)
technology for secure authentication. The proposed scheme involves NFC pairing
to establish as public key a piece of information linked to the device, such as
a phone number or an IP address. In this way, such information can then be used
in an IBC scheme for peer-to-peer communication. This is a work in progress,
but preliminary implementations of prototypes on several mobile platforms have
already produced promising results.
|
[
{
"created": "Sat, 6 Aug 2022 16:42:39 GMT",
"version": "v1"
}
] |
2022-08-09
|
[
[
"Mora-Afonso",
"V",
""
],
[
"Caballero-Gil",
"Pino",
""
],
[
"Molina-Gil",
"Jezabel",
""
]
] |
The rapid deployment of wireless technologies has given rise to the current situation where mobile phones and other wireless devices have become essential elements in all types of activities, including in the home. In particular, smartphones and laptops are used for wirelessly sharing photos and documents, playing games, browsing websites, and viewing multimedia, for example. This work describes a proposal for both desktop and mobile applications that use Identity-Based Cryptography (IBC) to protect communications between smart wireless devices in the home. It combines the use of IBC for Wi-Fi and Bluetooth communication, with the promising Near Field Communication (NFC) technology for secure authentication. The proposed scheme involves NFC pairing to establish as public key a piece of information linked to the device, such as a phone number or an IP address. In this way, such information can then be used in an IBC scheme for peer-to-peer communication. This is a work in progress, but preliminary implementations of prototypes on several mobile platforms have already produced promising results.
|
2109.09415
|
Claudio Cicconetti
|
Claudio Cicconetti and Marco Conti and Andrea Passarella
|
Architecture and Performance Evaluation of Distributed Computation
Offloading in Edge Computing
| null |
Simulation Modelling Practice and Theory. Volume 101, May 2020,
102007
|
10.1016/j.simpat.2019.102007
| null |
cs.DC cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Edge computing is an emerging paradigm to enable low-latency applications,
like mobile augmented reality, because it moves computation onto processing
devices that are closer to the users. On the other hand, the need for highly
scalable execution of stateless tasks for cloud systems is driving the
definition of new technologies based on serverless computing. In this paper, we
propose a novel architecture where the two converge to enable low-latency
applications: this is achieved by offloading short-lived stateless tasks from
the user terminals to edge nodes. Furthermore, we design a distributed
algorithm that tackles the research challenge of selecting the best executor,
based on real-time measurements and simple, yet effective, prediction
algorithms. Finally, we describe a new performance evaluation framework
specifically designed for an accurate assessment of algorithms and protocols in
edge computing environments, where the nodes may have very heterogeneous
networking and processing capabilities. The proposed framework relies on the
use of real components on lightweight virtualization mixed with simulated
computation and is well-suited to the analysis of several applications and
network environments. Using our framework, we evaluate our proposed
architecture and algorithms in small- and large-scale edge computing scenarios,
showing that our solution achieves similar or better delay performance than a
centralized solution, with far less network utilization.
|
[
{
"created": "Mon, 20 Sep 2021 10:31:32 GMT",
"version": "v1"
}
] |
2021-09-21
|
[
[
"Cicconetti",
"Claudio",
""
],
[
"Conti",
"Marco",
""
],
[
"Passarella",
"Andrea",
""
]
] |
Edge computing is an emerging paradigm to enable low-latency applications, like mobile augmented reality, because it moves computation onto processing devices that are closer to the users. On the other hand, the need for highly scalable execution of stateless tasks for cloud systems is driving the definition of new technologies based on serverless computing. In this paper, we propose a novel architecture where the two converge to enable low-latency applications: this is achieved by offloading short-lived stateless tasks from the user terminals to edge nodes. Furthermore, we design a distributed algorithm that tackles the research challenge of selecting the best executor, based on real-time measurements and simple, yet effective, prediction algorithms. Finally, we describe a new performance evaluation framework specifically designed for an accurate assessment of algorithms and protocols in edge computing environments, where the nodes may have very heterogeneous networking and processing capabilities. The proposed framework relies on the use of real components on lightweight virtualization mixed with simulated computation and is well-suited to the analysis of several applications and network environments. Using our framework, we evaluate our proposed architecture and algorithms in small- and large-scale edge computing scenarios, showing that our solution achieves similar or better delay performance than a centralized solution, with far less network utilization.
|
2404.14246
|
Adam Janovsky
|
Adam Janovsky, {\L}ukasz Chmielewski, Petr Svenda, Jan Jancar, Vashek
Matyas
|
Chain of trust: Unraveling references among Common Criteria certified
products
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
With 5394 security certificates of IT products and systems, the Common
Criteria for Information Technology Security Evaluation have bred an ecosystem
entangled with various kinds of relations between the certified products. Yet,
the prevalence and nature of dependencies among Common Criteria certified
products remain largely unexplored. This study devises a novel method for
building the graph of references among the Common Criteria certified products,
determining the different contexts of references with a supervised
machine-learning algorithm, and measuring how often the references constitute
actual dependencies between the certified products. With the help of the
resulting reference graph, this work identifies just a dozen of certified
components that are relied on by at least 10% of the whole ecosystem -- making
them a prime target for malicious actors. The impact of their compromise is
assessed and potentially problematic references to archived products are
discussed.
|
[
{
"created": "Mon, 22 Apr 2024 14:59:35 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2024 06:13:15 GMT",
"version": "v2"
}
] |
2024-04-26
|
[
[
"Janovsky",
"Adam",
""
],
[
"Chmielewski",
"Łukasz",
""
],
[
"Svenda",
"Petr",
""
],
[
"Jancar",
"Jan",
""
],
[
"Matyas",
"Vashek",
""
]
] |
With 5394 security certificates of IT products and systems, the Common Criteria for Information Technology Security Evaluation have bred an ecosystem entangled with various kinds of relations between the certified products. Yet, the prevalence and nature of dependencies among Common Criteria certified products remain largely unexplored. This study devises a novel method for building the graph of references among the Common Criteria certified products, determining the different contexts of references with a supervised machine-learning algorithm, and measuring how often the references constitute actual dependencies between the certified products. With the help of the resulting reference graph, this work identifies just a dozen of certified components that are relied on by at least 10% of the whole ecosystem -- making them a prime target for malicious actors. The impact of their compromise is assessed and potentially problematic references to archived products are discussed.
|
2408.05636
|
Jacob Christopher
|
Jacob K Christopher, Brian R Bartoldson, Bhavya Kailkhura, Ferdinando
Fioretto
|
Speculative Diffusion Decoding: Accelerating Language Generation through
Diffusion
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speculative decoding has emerged as a widely adopted method to accelerate
large language model inference without sacrificing the quality of the model
outputs. While this technique has facilitated notable speed improvements by
enabling parallel sequence verification, its efficiency remains inherently
limited by the reliance on incremental token generation in existing draft
models. To overcome this limitation, this paper proposes an adaptation of
speculative decoding which uses discrete diffusion models to generate draft
sequences. This allows parallelization of both the drafting and verification
steps, providing significant speed-ups to the inference process. Our proposed
approach, \textit{Speculative Diffusion Decoding (SpecDiff)}, is validated on
standard language generation benchmarks and empirically demonstrated to provide
an \textbf{up to 8.7x speed-up over standard generation processes and up to 2.5x
speed-up over existing speculative decoding approaches}.
|
[
{
"created": "Sat, 10 Aug 2024 21:24:25 GMT",
"version": "v1"
}
] |
2024-08-13
|
[
[
"Christopher",
"Jacob K",
""
],
[
"Bartoldson",
"Brian R",
""
],
[
"Kailkhura",
"Bhavya",
""
],
[
"Fioretto",
"Ferdinando",
""
]
] |
Speculative decoding has emerged as a widely adopted method to accelerate large language model inference without sacrificing the quality of the model outputs. While this technique has facilitated notable speed improvements by enabling parallel sequence verification, its efficiency remains inherently limited by the reliance on incremental token generation in existing draft models. To overcome this limitation, this paper proposes an adaptation of speculative decoding which uses discrete diffusion models to generate draft sequences. This allows parallelization of both the drafting and verification steps, providing significant speed-ups to the inference process. Our proposed approach, \textit{Speculative Diffusion Decoding (SpecDiff)}, is validated on standard language generation benchmarks and empirically demonstrated to provide an \textbf{up to 8.7x speed-up over standard generation processes and up to 2.5x speed-up over existing speculative decoding approaches}.
|
2403.19918
|
Luke Rowe
|
Luke Rowe, Roger Girgis, Anthony Gosselin, Bruno Carrez, Florian
Golemo, Felix Heide, Liam Paull, Christopher Pal
|
CtRL-Sim: Reactive and Controllable Driving Agents with Offline
Reinforcement Learning
|
21 pages, 9 figures, 8 tables
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Evaluating autonomous vehicle stacks (AVs) in simulation typically involves
replaying driving logs from real-world recorded traffic. However, agents
replayed from offline data are not reactive and hard to intuitively control.
Existing approaches address these challenges by proposing methods that rely on
heuristics or generative models of real-world data, but these approaches either
lack realism or necessitate costly iterative sampling procedures to control the
generated behaviours. In this work, we take an alternative approach and propose
CtRL-Sim, a method that leverages return-conditioned offline reinforcement
learning to efficiently generate reactive and controllable traffic agents.
Specifically, we process real-world driving data through a physics-enhanced
Nocturne simulator to generate a diverse offline reinforcement learning
dataset, annotated with various reward terms. With this dataset, we train a
return-conditioned multi-agent behaviour model that allows for fine-grained
manipulation of agent behaviours by modifying the desired returns for the
various reward components. This capability enables the generation of a wide
range of driving behaviours beyond the scope of the initial dataset, including
adversarial behaviours. We demonstrate that CtRL-Sim can generate diverse and
realistic safety-critical scenarios while providing fine-grained control over
agent behaviours.
|
[
{
"created": "Fri, 29 Mar 2024 02:10:19 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 21:47:41 GMT",
"version": "v2"
}
] |
2024-06-18
|
[
[
"Rowe",
"Luke",
""
],
[
"Girgis",
"Roger",
""
],
[
"Gosselin",
"Anthony",
""
],
[
"Carrez",
"Bruno",
""
],
[
"Golemo",
"Florian",
""
],
[
"Heide",
"Felix",
""
],
[
"Paull",
"Liam",
""
],
[
"Pal",
"Christopher",
""
]
] |
Evaluating autonomous vehicle stacks (AVs) in simulation typically involves replaying driving logs from real-world recorded traffic. However, agents replayed from offline data are not reactive and hard to intuitively control. Existing approaches address these challenges by proposing methods that rely on heuristics or generative models of real-world data but these approaches either lack realism or necessitate costly iterative sampling procedures to control the generated behaviours. In this work, we take an alternative approach and propose CtRL-Sim, a method that leverages return-conditioned offline reinforcement learning to efficiently generate reactive and controllable traffic agents. Specifically, we process real-world driving data through a physics-enhanced Nocturne simulator to generate a diverse offline reinforcement learning dataset, annotated with various reward terms. With this dataset, we train a return-conditioned multi-agent behaviour model that allows for fine-grained manipulation of agent behaviours by modifying the desired returns for the various reward components. This capability enables the generation of a wide range of driving behaviours beyond the scope of the initial dataset, including adversarial behaviours. We demonstrate that CtRL-Sim can generate diverse and realistic safety-critical scenarios while providing fine-grained control over agent behaviours.
|
2203.10629
|
Matthew Sparr
|
Matthew Sparr
|
Explicit User Manipulation in Reinforcement Learning Based Recommender
Systems
| null | null | null | null |
cs.LG cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Recommender systems are highly prevalent in the modern world due to their
value to both users and platforms and services that employ them. Generally,
they can improve the user experience and help to increase satisfaction, but
they do not come without risks. One such risk is that of their effect on users
and their ability to play an active role in shaping user preferences. This risk
is more significant for reinforcement learning based recommender systems. These
are capable of learning, for instance, how recommended content shown to a user
today may tamper with that user's preference for other content recommended in the
future. Reinforcement learning based recommendation systems can thus implicitly
learn to influence users if that means maximizing clicks, engagement, or
consumption. On social news and media platforms, in particular, this type of
behavior is cause for alarm. Social media undoubtedly plays a role in public
opinion and has been shown to be a contributing factor to increased political
polarization. Recommender systems on such platforms, therefore, have great
potential to influence users in undesirable ways. However, it may also be
possible for this form of manipulation to be used intentionally. With
advancements in political opinion dynamics modeling and larger collections of
user data, explicit user manipulation in which the beliefs and opinions of
users are tailored towards a certain end emerges as a significant concern in
reinforcement learning based recommender systems.
|
[
{
"created": "Sun, 20 Mar 2022 19:03:18 GMT",
"version": "v1"
}
] |
2022-03-22
|
[
[
"Sparr",
"Matthew",
""
]
] |
Recommender systems are highly prevalent in the modern world due to their value to both users and platforms and services that employ them. Generally, they can improve the user experience and help to increase satisfaction, but they do not come without risks. One such risk is that of their effect on users and their ability to play an active role in shaping user preferences. This risk is more significant for reinforcement learning based recommender systems. These are capable of learning, for instance, how recommended content shown to a user today may tamper with that user's preference for other content recommended in the future. Reinforcement learning based recommendation systems can thus implicitly learn to influence users if that means maximizing clicks, engagement, or consumption. On social news and media platforms, in particular, this type of behavior is cause for alarm. Social media undoubtedly plays a role in public opinion and has been shown to be a contributing factor to increased political polarization. Recommender systems on such platforms, therefore, have great potential to influence users in undesirable ways. However, it may also be possible for this form of manipulation to be used intentionally. With advancements in political opinion dynamics modeling and larger collections of user data, explicit user manipulation in which the beliefs and opinions of users are tailored towards a certain end emerges as a significant concern in reinforcement learning based recommender systems.
|
2207.09592
|
Nyteisha Bookert
|
Nyteisha Bookert (1), May Almousa (1), Mohd Anwar (1) ((1) North
Carolina Agricultural and Technical State University)
|
Inclusive Privacy Design for Older Adults Living in Ambient Assisted
Living
|
5 pages, 1 figure, accepted to Workshop on Inclusive Privacy and
Security (WIPS 2022)
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Ambient assisted living (AAL) environments support independence and quality
of life of older adults. However, in an AAL environment, privacy-related issues
(e.g., unawareness, information disclosure, and lack of support) directly
impact older adults and bystanders (e.g., caregivers, service providers, etc.).
We explore the privacy challenges that both older adults and bystanders face in
AAL. We call for inclusive privacy design and recommend the following areas of
improvement: consent, notification, and consideration for cultural differences.
|
[
{
"created": "Tue, 19 Jul 2022 23:44:16 GMT",
"version": "v1"
}
] |
2022-07-21
|
[
[
"Bookert",
"Nyteisha",
""
],
[
"Almousa",
"May",
""
],
[
"Anwar",
"Mohd",
""
]
] |
Ambient assisted living (AAL) environments support independence and quality of life of older adults. However, in an AAL environment, privacy-related issues (e.g., unawareness, information disclosure, and lack of support) directly impact older adults and bystanders (e.g., caregivers, service providers, etc.). We explore the privacy challenges that both older adults and bystanders face in AAL. We call for inclusive privacy design and recommend the following areas of improvement: consent, notification, and consideration for cultural differences.
|
1304.7842
|
Gobithaasan Rudrusamy
|
R.U. Gobithaasan, J.M. Ali, Kenjiro T. Miura
|
The Logarithmic Curvature Graphs of Generalised Cornu Spirals
| null |
2012 Punjab University Journal of Mathematics, 44, Pg.1-8
| null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Generalized Cornu Spiral (GCS) was first proposed by Ali et al. in 1995
[9]. Due to the monotonicity of its curvature function, the surface generated
with GCS segments has been considered a high-quality surface, and it has
potential applications in surface design [2]. In this paper, the analysis of
the GCS segment is carried out by determining its aesthetic value using the
Logarithmic Curvature Graph (LCG) as proposed by Kanaya et al. [10]. The
analysis of the LCG supports the claim that the GCS is indeed a generalized
aesthetic curve.
|
[
{
"created": "Tue, 30 Apr 2013 03:10:31 GMT",
"version": "v1"
}
] |
2013-05-01
|
[
[
"Gobithaasan",
"R. U.",
""
],
[
"Ali",
"J. M.",
""
],
[
"Miura",
"Kenjiro T.",
""
]
] |
The Generalized Cornu Spiral (GCS) was first proposed by Ali et al. in 1995 [9]. Due to the monotonicity of its curvature function, the surface generated with GCS segments has been considered a high-quality surface, and it has potential applications in surface design [2]. In this paper, the analysis of the GCS segment is carried out by determining its aesthetic value using the Logarithmic Curvature Graph (LCG) as proposed by Kanaya et al. [10]. The analysis of the LCG supports the claim that the GCS is indeed a generalized aesthetic curve.
|
2302.08198
|
Francoise Grelaud
|
Patrick S\'egu\'ela, Nathalie Aussenac-Gilles (IRIT-MELODI, CNRS)
|
Un mod{\`e}le de base de connaissances terminologiques
|
in French language. 2{\`e}mes Rencontres Terminologie et Intelligence
Artificielle (TIA 1997), Groupe de recherche TIA : Terminologie et
intelligence artificielle, UT2 LeMirail, Toulouse, Apr 1997, Toulouse, France
| null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present paper, we argue that Terminological Knowledge Bases (TKB) are
all the more useful for addressing various needs as they do not fulfill formal
criteria. Moreover, they intend to clarify the terminology of a given domain by
illustrating term uses in various contexts. Thus we designed a TKB structure
including 3 linked features: terms, concepts and texts, that present the
peculiar use of each term in the domain. Note that concepts are represented
into frames whose non-formal description is standardized. Associated with this
structure, we defined modeling criteria at the conceptual level. Finally, we
discuss the situation of TKB with regard to ontologies, and the use of TKB for
the development of AI systems.
|
[
{
"created": "Thu, 16 Feb 2023 10:28:23 GMT",
"version": "v1"
}
] |
2023-02-17
|
[
[
"Séguéla",
"Patrick",
"",
"IRIT-MELODI, CNRS"
],
[
"Aussenac-Gilles",
"Nathalie",
"",
"IRIT-MELODI, CNRS"
]
] |
In the present paper, we argue that Terminological Knowledge Bases (TKB) are all the more useful for addressing various needs as they do not fulfill formal criteria. Moreover, they intend to clarify the terminology of a given domain by illustrating term uses in various contexts. Thus we designed a TKB structure including 3 linked features: terms, concepts and texts, that present the peculiar use of each term in the domain. Note that concepts are represented into frames whose non-formal description is standardized. Associated with this structure, we defined modeling criteria at the conceptual level. Finally, we discuss the situation of TKB with regard to ontologies, and the use of TKB for the development of AI systems.
|
2112.01651
|
Chuanzheng Sun
|
Zhiyuan Liu, Chuanzheng Sun, Yuxin Jiang, Shiqi Jiang, Mei Ming
|
Multi-modal application: Image Memes Generation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Meme is an interesting word. Internet memes offer unique insights into the
changes in our perception of the world, the media and our own lives. If you
surf the Internet for long enough, you will see it somewhere on the Internet.
With the rise of social media platforms and convenient image dissemination,
Image Meme has gained fame. Image memes have become a kind of pop culture and
they play an important role in communication over social media, blogs, and open
messages. With the development of artificial intelligence and the widespread
use of deep learning, Natural Language Processing (NLP) and Computer Vision
(CV) can also be used to solve more problems in life, including meme
generation. An Internet meme commonly takes the form of an image and is created
by combining a meme template (image) and a caption (natural language sentence).
In our project, we propose an end-to-end encoder-decoder architecture meme
generator. For a given input sentence, we use the Meme template selection model
to determine the emotion it expresses and select the image template. We then
generate captions and memes through the meme caption generator. Code and
models are available on GitHub.
|
[
{
"created": "Fri, 3 Dec 2021 00:17:44 GMT",
"version": "v1"
}
] |
2021-12-06
|
[
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Chuanzheng",
""
],
[
"Jiang",
"Yuxin",
""
],
[
"Jiang",
"Shiqi",
""
],
[
"Ming",
"Mei",
""
]
] |
Meme is an interesting word. Internet memes offer unique insights into the changes in our perception of the world, the media and our own lives. If you surf the Internet for long enough, you will see it somewhere on the Internet. With the rise of social media platforms and convenient image dissemination, Image Meme has gained fame. Image memes have become a kind of pop culture and they play an important role in communication over social media, blogs, and open messages. With the development of artificial intelligence and the widespread use of deep learning, Natural Language Processing (NLP) and Computer Vision (CV) can also be used to solve more problems in life, including meme generation. An Internet meme commonly takes the form of an image and is created by combining a meme template (image) and a caption (natural language sentence). In our project, we propose an end-to-end encoder-decoder architecture meme generator. For a given input sentence, we use the Meme template selection model to determine the emotion it expresses and select the image template. We then generate captions and memes through the meme caption generator. Code and models are available on GitHub.
|
2404.11328
|
Lorenzo Zaniboni
|
Yatish Pachigolla, Lorenzo Zaniboni, Mahdi Mahvari
|
Channel Estimation in TDD Cell-free Scenario using OTFS Modulation
|
Submitted to IEEE for possible publication
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Channel estimation techniques for the orthogonal time frequency space (OTFS)
modulation scheme are investigated. The orthogonal matching pursuit algorithm
is investigated with and without side channel information and an efficient data
placement is proposed alongside the pilot in the multi-user scenario based on
impulse pilot-based estimation. Finally, the algorithms are compared in
different multi-user scenarios with numerical results.
|
[
{
"created": "Wed, 17 Apr 2024 12:41:01 GMT",
"version": "v1"
}
] |
2024-04-18
|
[
[
"Pachigolla",
"Yatish",
""
],
[
"Zaniboni",
"Lorenzo",
""
],
[
"Mahvari",
"Mahdi",
""
]
] |
Channel estimation techniques for the orthogonal time frequency space (OTFS) modulation scheme are investigated. The orthogonal matching pursuit algorithm is investigated with and without side channel information and an efficient data placement is proposed alongside the pilot in the multi-user scenario based on impulse pilot-based estimation. Finally, the algorithms are compared in different multi-user scenarios with numerical results.
|
1812.00825
|
Po-Hsuan Cameron Chen
|
Po-Hsuan Cameron Chen, Krishna Gadepalli, Robert MacDonald, Yun Liu,
Kunal Nagpal, Timo Kohlberger, Jeffrey Dean, Greg S. Corrado, Jason D. Hipp,
Martin C. Stumpe
|
Microscope 2.0: An Augmented Reality Microscope with Real-time
Artificial Intelligence Integration
| null |
Nature Medicine (2019)
|
10.1038/s41591-019-0539-7
| null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The brightfield microscope is instrumental in the visual examination of both
biological and physical samples at sub-millimeter scales. One key clinical
application has been in cancer histopathology, where the microscopic assessment
of the tissue samples is used for the diagnosis and staging of cancer and thus
guides clinical therapy. However, the interpretation of these samples is
inherently subjective, resulting in significant diagnostic variability.
Moreover, in many regions of the world, access to pathologists is severely
limited due to lack of trained personnel. In this regard, Artificial
Intelligence (AI) based tools promise to improve the access and quality of
healthcare. However, despite significant advances in AI research, integration
of these tools into real-world cancer diagnosis workflows remains challenging
because of the costs of image digitization and difficulties in deploying AI
solutions. Here we propose a cost-effective solution to the integration of AI:
the Augmented Reality Microscope (ARM). The ARM overlays AI-based information
onto the current view of the sample through the optical pathway in real-time,
enabling seamless integration of AI into the regular microscopy workflow. We
demonstrate the utility of ARM in the detection of lymph node metastases in
breast cancer and the identification of prostate cancer with a latency that
supports real-time workflows. We anticipate that ARM will remove barriers
towards the use of AI in microscopic analysis and thus improve the accuracy and
efficiency of cancer diagnosis. This approach is applicable to other microscopy
tasks and AI algorithms in the life sciences and beyond.
|
[
{
"created": "Wed, 21 Nov 2018 21:02:50 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Dec 2018 05:36:36 GMT",
"version": "v2"
}
] |
2020-06-03
|
[
[
"Chen",
"Po-Hsuan Cameron",
""
],
[
"Gadepalli",
"Krishna",
""
],
[
"MacDonald",
"Robert",
""
],
[
"Liu",
"Yun",
""
],
[
"Nagpal",
"Kunal",
""
],
[
"Kohlberger",
"Timo",
""
],
[
"Dean",
"Jeffrey",
""
],
[
"Corrado",
"Greg S.",
""
],
[
"Hipp",
"Jason D.",
""
],
[
"Stumpe",
"Martin C.",
""
]
] |
The brightfield microscope is instrumental in the visual examination of both biological and physical samples at sub-millimeter scales. One key clinical application has been in cancer histopathology, where the microscopic assessment of the tissue samples is used for the diagnosis and staging of cancer and thus guides clinical therapy. However, the interpretation of these samples is inherently subjective, resulting in significant diagnostic variability. Moreover, in many regions of the world, access to pathologists is severely limited due to lack of trained personnel. In this regard, Artificial Intelligence (AI) based tools promise to improve the access and quality of healthcare. However, despite significant advances in AI research, integration of these tools into real-world cancer diagnosis workflows remains challenging because of the costs of image digitization and difficulties in deploying AI solutions. Here we propose a cost-effective solution to the integration of AI: the Augmented Reality Microscope (ARM). The ARM overlays AI-based information onto the current view of the sample through the optical pathway in real-time, enabling seamless integration of AI into the regular microscopy workflow. We demonstrate the utility of ARM in the detection of lymph node metastases in breast cancer and the identification of prostate cancer with a latency that supports real-time workflows. We anticipate that ARM will remove barriers towards the use of AI in microscopic analysis and thus improve the accuracy and efficiency of cancer diagnosis. This approach is applicable to other microscopy tasks and AI algorithms in the life sciences and beyond.
|
1607.06359
|
Fatema Akbar
|
Fatema Akbar and Ingmar Weber
|
#Sleep_as_Android: Feasibility of Using Sleep Logs on Twitter for Sleep
Studies
|
This is a preprint of an article accepted to appear at IEEE ICHI 2016
| null | null | null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media enjoys growing popularity as a platform to seek and share
personal health information. For sleep studies using data from social media,
most researchers focused on inferring sleep-related artifacts from
self-reported anecdotal pointers to sleep patterns or issues such as insomnia.
The data shared by "quantified-selfers" on social media presents an opportunity
to study more quantitative and objective measures of sleep. We propose and
validate the approach of collecting and analyzing sleep logs that are generated
and shared through a sleep-tracking mobile application. We highlight the value
of this data by combining it with users' social media data. The results provide
a validation of using social media for sleep studies as the collected sleep
data is aligned with sleep data from other sources. The results of combining
social media data with sleep data provide preliminary evidence that higher
social media activity is associated with lower sleep duration and quality.
|
[
{
"created": "Thu, 21 Jul 2016 15:18:27 GMT",
"version": "v1"
}
] |
2016-07-25
|
[
[
"Akbar",
"Fatema",
""
],
[
"Weber",
"Ingmar",
""
]
] |
Social media enjoys growing popularity as a platform to seek and share personal health information. For sleep studies using data from social media, most researchers focused on inferring sleep-related artifacts from self-reported anecdotal pointers to sleep patterns or issues such as insomnia. The data shared by "quantified-selfers" on social media presents an opportunity to study more quantitative and objective measures of sleep. We propose and validate the approach of collecting and analyzing sleep logs that are generated and shared through a sleep-tracking mobile application. We highlight the value of this data by combining it with users' social media data. The results provide a validation of using social media for sleep studies as the collected sleep data is aligned with sleep data from other sources. The results of combining social media data with sleep data provide preliminary evidence that higher social media activity is associated with lower sleep duration and quality.
|
1607.04747
|
Chao Lan
|
Chao Lan, Yuhao Yang, Xiaoli Li, Bo Luo, Jun Huan
|
Learning Social Circles in Ego Networks based on Multi-View Social
Graphs
|
This paper has been withdrawn by the author due to its current
unsatisfactory quality
| null | null | null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In social network analysis, automatic social circle detection in ego-networks
is becoming a fundamental and important task, with many potential applications
such as user privacy protection or interest group recommendation. So far, most
studies have focused on addressing two questions, namely, how to detect
overlapping circles and how to detect circles using a combination of network
structure and network node attributes. This paper asks an orthogonal research
question, that is, how to detect circles based on network structures that are
(usually) described by multiple views? Our investigation begins with crawling
ego-networks from Twitter and employing classic techniques to model their
structures by six views, including user relationships, user interactions and
user content. We then apply both standard and our modified multi-view spectral
clustering techniques to detect social circles in these ego-networks. Based on
extensive automatic and manual experimental evaluations, we deliver two major
findings: first, multi-view clustering techniques perform better than common
single-view clustering techniques, which only use one view or naively integrate
all views for detection; second, the standard multi-view clustering technique
is less robust than our modified technique, which selectively transfers
information across views based on an assumption that sparse network structures
are (potentially) incomplete. In particular, the second finding makes us
believe a direct application of standard clustering on potentially incomplete
networks may yield biased results. We lightly examine this issue in theory,
where we derive an upper bound for such bias by integrating theories of
spectral clustering and matrix perturbation, and discuss how it may be affected
by several network characteristics.
|
[
{
"created": "Sat, 16 Jul 2016 15:00:40 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Dec 2016 02:27:15 GMT",
"version": "v2"
}
] |
2016-12-28
|
[
[
"Lan",
"Chao",
""
],
[
"Yang",
"Yuhao",
""
],
[
"Li",
"Xiaoli",
""
],
[
"Luo",
"Bo",
""
],
[
"Huan",
"Jun",
""
]
] |
In social network analysis, automatic social circle detection in ego-networks is becoming a fundamental and important task, with many potential applications such as user privacy protection or interest group recommendation. So far, most studies have focused on addressing two questions, namely, how to detect overlapping circles and how to detect circles using a combination of network structure and network node attributes. This paper asks an orthogonal research question, that is, how to detect circles based on network structures that are (usually) described by multiple views? Our investigation begins with crawling ego-networks from Twitter and employing classic techniques to model their structures by six views, including user relationships, user interactions and user content. We then apply both standard and our modified multi-view spectral clustering techniques to detect social circles in these ego-networks. Based on extensive automatic and manual experimental evaluations, we deliver two major findings: first, multi-view clustering techniques perform better than common single-view clustering techniques, which only use one view or naively integrate all views for detection; second, the standard multi-view clustering technique is less robust than our modified technique, which selectively transfers information across views based on an assumption that sparse network structures are (potentially) incomplete. In particular, the second finding makes us believe a direct application of standard clustering on potentially incomplete networks may yield biased results. We lightly examine this issue in theory, where we derive an upper bound for such bias by integrating theories of spectral clustering and matrix perturbation, and discuss how it may be affected by several network characteristics.
|
2304.10263
|
Jianhui Li
|
Jianhui Li, Jianmin Li, Haoji Zhang, Shilong Liu, Zhengyi Wang, Zihao
Xiao, Kaiwen Zheng, Jun Zhu
|
PREIM3D: 3D Consistent Precise Image Attribute Editing from a Single
Image
|
20 pages, 21 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the 3D-aware image attribute editing problem in this paper, which
has wide applications in practice. Recent methods solved the problem by
training a shared encoder to map images into a 3D generator's latent space or
by per-image latent code optimization and then edited images in the latent
space. Despite their promising results near the input view, they still suffer
from the 3D inconsistency of produced images at large camera poses and
imprecise image attribute editing, like affecting unspecified attributes during
editing. For more efficient image inversion, we train a shared encoder for all
images. To alleviate 3D inconsistency at large camera poses, we propose two
novel methods, an alternating training scheme and a multi-view identity loss,
to maintain 3D consistency and subject identity. As for imprecise image
editing, we attribute the problem to the gap between the latent space of real
images and that of generated images. We compare the latent space and inversion
manifold of GAN models and demonstrate that editing in the inversion manifold
can achieve better results in both quantitative and qualitative evaluations.
Extensive experiments show that our method produces more 3D consistent images
and achieves more precise image editing than previous work. Source code and
pretrained models can be found on our project page:
https://mybabyyh.github.io/Preim3D/
|
[
{
"created": "Thu, 20 Apr 2023 12:33:56 GMT",
"version": "v1"
}
] |
2023-04-21
|
[
[
"Li",
"Jianhui",
""
],
[
"Li",
"Jianmin",
""
],
[
"Zhang",
"Haoji",
""
],
[
"Liu",
"Shilong",
""
],
[
"Wang",
"Zhengyi",
""
],
[
"Xiao",
"Zihao",
""
],
[
"Zheng",
"Kaiwen",
""
],
[
"Zhu",
"Jun",
""
]
] |
We study the 3D-aware image attribute editing problem in this paper, which has wide applications in practice. Recent methods solved the problem by training a shared encoder to map images into a 3D generator's latent space or by per-image latent code optimization and then edited images in the latent space. Despite their promising results near the input view, they still suffer from the 3D inconsistency of produced images at large camera poses and imprecise image attribute editing, like affecting unspecified attributes during editing. For more efficient image inversion, we train a shared encoder for all images. To alleviate 3D inconsistency at large camera poses, we propose two novel methods, an alternating training scheme and a multi-view identity loss, to maintain 3D consistency and subject identity. As for imprecise image editing, we attribute the problem to the gap between the latent space of real images and that of generated images. We compare the latent space and inversion manifold of GAN models and demonstrate that editing in the inversion manifold can achieve better results in both quantitative and qualitative evaluations. Extensive experiments show that our method produces more 3D consistent images and achieves more precise image editing than previous work. Source code and pretrained models can be found on our project page: https://mybabyyh.github.io/Preim3D/
|
2404.11118
|
Xueyuan Gong
|
Xueyuan Gong, Yain-whar Si, Zheng Zhang, Xiaochen Yuan, Ke Wang,
Xinyuan Zhang, Cong Lin, and Xiaoxiang Liu
|
MHLR: Moving Haar Learning Rate Scheduler for Large-scale Face
Recognition Training with One GPU
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face recognition (FR) has seen significant advancements due to the
utilization of large-scale datasets. Training deep FR models on large-scale
datasets with multiple GPUs is now a common practice. In fact, computing power
has evolved into a foundational and indispensable resource in the area of deep
learning. It is nearly impossible to train a deep FR model without holding
adequate hardware resources. Recognizing this challenge, some FR approaches
have started exploring ways to reduce the time complexity of the
fully-connected layer in FR models. Unlike other approaches, this paper
introduces a simple yet highly effective approach, Moving Haar Learning Rate
(MHLR) scheduler, for scheduling the learning rate promptly and accurately in
the training process. MHLR supports large-scale FR training with only one GPU,
which is able to accelerate the model to 1/4 of its original training time
without sacrificing more than 1% accuracy. More specifically, MHLR only needs
$30$ hours to train the model ResNet100 on the dataset WebFace12M containing
more than 12M face images with 0.6M identities. Extensive experiments validate
the efficiency and effectiveness of MHLR.
|
[
{
"created": "Wed, 17 Apr 2024 07:06:22 GMT",
"version": "v1"
}
] |
2024-04-18
|
[
[
"Gong",
"Xueyuan",
""
],
[
"Si",
"Yain-whar",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Yuan",
"Xiaochen",
""
],
[
"Wang",
"Ke",
""
],
[
"Zhang",
"Xinyuan",
""
],
[
"Lin",
"Cong",
""
],
[
"Liu",
"Xiaoxiang",
""
]
] |
Face recognition (FR) has seen significant advancements due to the utilization of large-scale datasets. Training deep FR models on large-scale datasets with multiple GPUs is now a common practice. In fact, computing power has evolved into a foundational and indispensable resource in the area of deep learning. It is nearly impossible to train a deep FR model without holding adequate hardware resources. Recognizing this challenge, some FR approaches have started exploring ways to reduce the time complexity of the fully-connected layer in FR models. Unlike other approaches, this paper introduces a simple yet highly effective approach, Moving Haar Learning Rate (MHLR) scheduler, for scheduling the learning rate promptly and accurately in the training process. MHLR supports large-scale FR training with only one GPU, which is able to accelerate the model to 1/4 of its original training time without sacrificing more than 1% accuracy. More specifically, MHLR only needs $30$ hours to train the model ResNet100 on the dataset WebFace12M containing more than 12M face images with 0.6M identities. Extensive experiments validate the efficiency and effectiveness of MHLR.
|
2112.15072
|
Sami Sarsa
|
Sami Sarsa, Juho Leinonen, Arto Hellas
|
Empirical Evaluation of Deep Learning Models for Knowledge Tracing: Of
Hyperparameters and Metrics on Performance and Replicability
|
70 pages, 8 figures, submitted to JEDM, added acknowledgments,
modified after first round of review
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We review and evaluate a body of deep learning knowledge tracing (DLKT)
models with openly available and widely-used data sets, and with a novel data
set of students learning to program. The evaluated knowledge tracing models
include Vanilla-DKT, two Long Short-Term Memory Deep Knowledge Tracing
(LSTM-DKT) variants, two Dynamic Key-Value Memory Network (DKVMN) variants, and
Self-Attentive Knowledge Tracing (SAKT). As baselines, we evaluate simple
non-learning models, logistic regression and Bayesian Knowledge Tracing (BKT).
To evaluate how different aspects of DLKT models influence model performance,
we test input and output layer variations found in the compared models that are
independent of the main architectures. We study maximum attempt count options,
including filtering out long attempt sequences, that have been implicitly and
explicitly used in prior studies. We contrast the observed performance
variations against variations from non-model properties such as randomness and
hardware. Performance of models is assessed using multiple metrics, whereby we
also contrast the impact of the choice of metric on model performance. The key
contributions of this work are: Evidence that DLKT models generally outperform
more traditional models, but not necessarily by much and not always; Evidence
that even simple baselines with little to no predictive value may outperform
DLKT models, especially in terms of accuracy -- highlighting the importance of
selecting proper baselines for comparison; Disambiguation of properties that
affect performance in DLKT models including metric choice, input and output
layer variations, common hyperparameters, random seeding and hardware;
Discussion of issues in replicability when evaluating DLKT models, including
discrepancies in prior reported results and methodology. Model implementations,
evaluation code, and data are published as a part of this work.
|
[
{
"created": "Thu, 30 Dec 2021 14:19:27 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jan 2022 20:15:21 GMT",
"version": "v2"
},
{
"created": "Sat, 2 Apr 2022 20:28:13 GMT",
"version": "v3"
},
{
"created": "Tue, 5 Apr 2022 10:41:33 GMT",
"version": "v4"
}
] |
2022-04-06
|
[
[
"Sarsa",
"Sami",
""
],
[
"Leinonen",
"Juho",
""
],
[
"Hellas",
"Arto",
""
]
] |
We review and evaluate a body of deep learning knowledge tracing (DLKT) models with openly available and widely-used data sets, and with a novel data set of students learning to program. The evaluated knowledge tracing models include Vanilla-DKT, two Long Short-Term Memory Deep Knowledge Tracing (LSTM-DKT) variants, two Dynamic Key-Value Memory Network (DKVMN) variants, and Self-Attentive Knowledge Tracing (SAKT). As baselines, we evaluate simple non-learning models, logistic regression and Bayesian Knowledge Tracing (BKT). To evaluate how different aspects of DLKT models influence model performance, we test input and output layer variations found in the compared models that are independent of the main architectures. We study maximum attempt count options, including filtering out long attempt sequences, that have been implicitly and explicitly used in prior studies. We contrast the observed performance variations against variations from non-model properties such as randomness and hardware. Performance of models is assessed using multiple metrics, whereby we also contrast the impact of the choice of metric on model performance. The key contributions of this work are: Evidence that DLKT models generally outperform more traditional models, but not necessarily by much and not always; Evidence that even simple baselines with little to no predictive value may outperform DLKT models, especially in terms of accuracy -- highlighting the importance of selecting proper baselines for comparison; Disambiguation of properties that affect performance in DLKT models including metric choice, input and output layer variations, common hyperparameters, random seeding and hardware; Discussion of issues in replicability when evaluating DLKT models, including discrepancies in prior reported results and methodology. Model implementations, evaluation code, and data are published as a part of this work.
|
1906.06240
|
Mario Almeida
|
Mario Almeida, Liang Wang, Jeremy Blackburn, Konstantina Papagiannaki,
Jon Crowcroft
|
Diffusing Your Mobile Apps: Extending In-Network Function Virtualization
to Mobile Function Offloading
| null | null | null | null |
cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the huge disparity between the limited battery capacity of user
devices and the ever-growing energy demands of modern mobile apps, we propose
INFv. It is the first offloading system able to cache, migrate and dynamically
execute on-demand functionality from mobile devices in ISP networks. It aims to
bridge this gap by extending the promising NFV paradigm to mobile applications
in order to exploit in-network resources. In this paper, we present the overall
design, state-of-the-art technologies adopted, and various engineering details
in the INFv system. We also carefully study the deployment configurations by
investigating over 20K Google Play apps, as well as thorough evaluations with
realistic settings. In addition to a significant improvement in battery life
(up to 6.9x energy reduction) and execution time (up to 4x faster), INFv has
two distinct advantages over previous systems: 1) a non-intrusive offloading
mechanism transparent to existing apps; 2) an inherent framework support to
effectively balance computation load and exploit the proximity of in-network
resources. Both advantages together enable a scalable and incremental
deployment of a computation offloading framework in practical ISP networks.
|
[
{
"created": "Fri, 14 Jun 2019 15:12:41 GMT",
"version": "v1"
}
] |
2019-06-17
|
[
[
"Almeida",
"Mario",
""
],
[
"Wang",
"Liang",
""
],
[
"Blackburn",
"Jeremy",
""
],
[
"Papagiannaki",
"Konstantina",
""
],
[
"Crowcroft",
"Jon",
""
]
] |
Motivated by the huge disparity between the limited battery capacity of user devices and the ever-growing energy demands of modern mobile apps, we propose INFv. It is the first offloading system able to cache, migrate and dynamically execute on-demand functionality from mobile devices in ISP networks. It aims to bridge this gap by extending the promising NFV paradigm to mobile applications in order to exploit in-network resources. In this paper, we present the overall design, state-of-the-art technologies adopted, and various engineering details in the INFv system. We also carefully study the deployment configurations by investigating over 20K Google Play apps, as well as thorough evaluations with realistic settings. In addition to a significant improvement in battery life (up to 6.9x energy reduction) and execution time (up to 4x faster), INFv has two distinct advantages over previous systems: 1) a non-intrusive offloading mechanism transparent to existing apps; 2) an inherent framework support to effectively balance computation load and exploit the proximity of in-network resources. Both advantages together enable a scalable and incremental deployment of a computation offloading framework in practical ISP networks.
|
2010.00757
|
Zhe Jiang
|
Zhe Jiang, Marcus Stephen Kirby, Wenchong He, Arpan Man Sainju
|
Deep Learning for Earth Image Segmentation based on Imperfect Polyline
Labels with Annotation Errors
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, deep learning techniques (e.g., U-Net, DeepLab) have
achieved tremendous success in image segmentation. The performance of these
models heavily relies on high-quality ground truth segment labels.
Unfortunately, in many real-world problems, ground truth segment labels often
have geometric annotation errors due to manual annotation mistakes, GPS errors,
or visually interpreting background imagery at a coarse resolution. Such
location errors will significantly impact the training performance of existing
deep learning algorithms. Existing research on label errors either models
ground truth errors in label semantics (assuming label locations to be correct)
or models label location errors with simple square patch shifting. These
methods cannot fully incorporate the geometric properties of label location
errors. To fill the gap, this paper proposes a generic learning framework based
on the EM algorithm to update deep learning model parameters and infer hidden
true label locations simultaneously. Evaluations on a real-world hydrological
dataset in the streamline refinement application show that the proposed
framework outperforms baseline methods in classification accuracy (reducing the
number of false positives by 67% and reducing the number of false negatives by
55%).
|
[
{
"created": "Fri, 2 Oct 2020 02:54:06 GMT",
"version": "v1"
}
] |
2020-10-05
|
[
[
"Jiang",
"Zhe",
""
],
[
"Kirby",
"Marcus Stephen",
""
],
[
"He",
"Wenchong",
""
],
[
"Sainju",
"Arpan Man",
""
]
] |
In recent years, deep learning techniques (e.g., U-Net, DeepLab) have achieved tremendous success in image segmentation. The performance of these models heavily relies on high-quality ground truth segment labels. Unfortunately, in many real-world problems, ground truth segment labels often have geometric annotation errors due to manual annotation mistakes, GPS errors, or visually interpreting background imagery at a coarse resolution. Such location errors will significantly impact the training performance of existing deep learning algorithms. Existing research on label errors either models ground truth errors in label semantics (assuming label locations to be correct) or models label location errors with simple square patch shifting. These methods cannot fully incorporate the geometric properties of label location errors. To fill the gap, this paper proposes a generic learning framework based on the EM algorithm to update deep learning model parameters and infer hidden true label locations simultaneously. Evaluations on a real-world hydrological dataset in the streamline refinement application show that the proposed framework outperforms baseline methods in classification accuracy (reducing the number of false positives by 67% and reducing the number of false negatives by 55%).
|
2203.16792
|
Yinfeng Gao
|
Qichao Zhang, Yinfeng Gao, Yikang Zhang, Youtian Guo, Dawei Ding,
Yunpeng Wang, Peng Sun, Dongbin Zhao
|
TrajGen: Generating Realistic and Diverse Trajectories with Reactive and
Feasible Agent Behaviors for Autonomous Driving
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Realistic and diverse simulation scenarios with reactive and feasible agent
behaviors can be used for validation and verification of self-driving system
performance without relying on expensive and time-consuming real-world testing.
Existing simulators rely on heuristic-based behavior models for background
vehicles, which cannot capture the complex interactive behaviors in real-world
scenarios. To bridge the gap between simulation and the real world, we propose
TrajGen, a two-stage trajectory generation framework, which can capture more
realistic behaviors directly from human demonstration. In particular, TrajGen
consists of the multi-modal trajectory prediction stage and the reinforcement
learning based trajectory modification stage. In the first stage, we propose a
novel auxiliary RouteLoss for the trajectory prediction model to generate
multi-modal diverse trajectories in the drivable area. In the second stage,
reinforcement learning is used to track the predicted trajectories while
avoiding collisions, which can improve the feasibility of generated
trajectories. In addition, we develop a data-driven simulator I-Sim that can be
used to train reinforcement learning models in parallel based on naturalistic
driving data. The vehicle model in I-Sim can guarantee that the generated
trajectories by TrajGen satisfy vehicle kinematic constraints. Finally, we give
comprehensive metrics to evaluate generated trajectories for simulation
scenarios, which shows that TrajGen outperforms either trajectory prediction or
inverse reinforcement learning in terms of fidelity, reactivity, feasibility,
and diversity.
|
[
{
"created": "Thu, 31 Mar 2022 04:48:29 GMT",
"version": "v1"
}
] |
2022-04-01
|
[
[
"Zhang",
"Qichao",
""
],
[
"Gao",
"Yinfeng",
""
],
[
"Zhang",
"Yikang",
""
],
[
"Guo",
"Youtian",
""
],
[
"Ding",
"Dawei",
""
],
[
"Wang",
"Yunpeng",
""
],
[
"Sun",
"Peng",
""
],
[
"Zhao",
"Dongbin",
""
]
] |
Realistic and diverse simulation scenarios with reactive and feasible agent behaviors can be used for validation and verification of self-driving system performance without relying on expensive and time-consuming real-world testing. Existing simulators rely on heuristic-based behavior models for background vehicles, which cannot capture the complex interactive behaviors in real-world scenarios. To bridge the gap between simulation and the real world, we propose TrajGen, a two-stage trajectory generation framework, which can capture more realistic behaviors directly from human demonstration. In particular, TrajGen consists of the multi-modal trajectory prediction stage and the reinforcement learning based trajectory modification stage. In the first stage, we propose a novel auxiliary RouteLoss for the trajectory prediction model to generate multi-modal diverse trajectories in the drivable area. In the second stage, reinforcement learning is used to track the predicted trajectories while avoiding collisions, which can improve the feasibility of generated trajectories. In addition, we develop a data-driven simulator I-Sim that can be used to train reinforcement learning models in parallel based on naturalistic driving data. The vehicle model in I-Sim can guarantee that the generated trajectories by TrajGen satisfy vehicle kinematic constraints. Finally, we give comprehensive metrics to evaluate generated trajectories for simulation scenarios, which shows that TrajGen outperforms either trajectory prediction or inverse reinforcement learning in terms of fidelity, reactivity, feasibility, and diversity.
|
2209.01876
|
Anastasios Giovanidis
|
Anastasios Giovanidis
|
SlateFree: a Model-Free Decomposition for Reinforcement Learning with
Slate Actions
|
12 pages, 9 sub-figures
| null | null | null |
cs.LG cs.AI cs.IR cs.NI cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We consider the problem of sequential recommendations, where at each step an
agent proposes some slate of $N$ distinct items to a user from a much larger
catalog of size $K \gg N$. The user has unknown preferences towards the
recommendations and the agent takes sequential actions that optimise (in our
case minimise) some user-related cost, with the help of Reinforcement Learning.
The number of possible item combinations for a slate is $\binom{K}{N}$, an
enormous number that renders value iteration methods intractable. We prove that
the
slate-MDP can actually be decomposed using just $K$ item-related $Q$ functions
per state, which describe the problem in a more compact and efficient way.
Based on this, we propose a novel model-free SARSA and Q-learning algorithm
that performs $N$ parallel iterations per step, without any prior user
knowledge. We call this method \texttt{SlateFree}, i.e. free-of-slates, and we
show numerically that it converges very fast to the exact optimum for arbitrary
user profiles, and that it outperforms alternatives from the literature.
|
[
{
"created": "Mon, 5 Sep 2022 10:15:16 GMT",
"version": "v1"
}
] |
2022-09-07
|
[
[
"Giovanidis",
"Anastasios",
""
]
] |
We consider the problem of sequential recommendations, where at each step an agent proposes some slate of $N$ distinct items to a user from a much larger catalog of size $K \gg N$. The user has unknown preferences towards the recommendations and the agent takes sequential actions that optimise (in our case minimise) some user-related cost, with the help of Reinforcement Learning. The number of possible item combinations for a slate is $\binom{K}{N}$, an enormous number that renders value iteration methods intractable. We prove that the slate-MDP can actually be decomposed using just $K$ item-related $Q$ functions per state, which describe the problem in a more compact and efficient way. Based on this, we propose a novel model-free SARSA and Q-learning algorithm that performs $N$ parallel iterations per step, without any prior user knowledge. We call this method \texttt{SlateFree}, i.e. free-of-slates, and we show numerically that it converges very fast to the exact optimum for arbitrary user profiles, and that it outperforms alternatives from the literature.
|
1910.12363
|
Dan Oneata
|
Dan Oneata, Cosmin George Alexandru, Marius Stanescu, Octavian Pascu,
Alexandru Magan, Adrian Postelnicu, Horia Cucu
|
The Quo Vadis submission at Traffic4cast 2019
|
Extended abstract for the Traffic4cast competition from NeurIPS 2019
| null | null | null |
cs.CV cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We describe the submission of the Quo Vadis team to the Traffic4cast
competition, which was organized as part of the NeurIPS 2019 series of
challenges. Our system consists of a temporal regression module, implemented as
$1\times1$ 2d convolutions, augmented with spatio-temporal biases. We have
found that using biases is a straightforward and efficient way to include
seasonal patterns and to improve the performance of the temporal regression
model. Our implementation obtains a mean squared error of $9.47\times 10^{-3}$
on the test data, placing us in eighth place team-wise. We also present our
attempts at incorporating spatial correlations into the model; however,
contrary to our expectations, adding this type of auxiliary information did not
benefit the main system. Our code is available at
https://github.com/danoneata/traffic4cast.
|
[
{
"created": "Sun, 27 Oct 2019 21:50:30 GMT",
"version": "v1"
}
] |
2019-10-29
|
[
[
"Oneata",
"Dan",
""
],
[
"Alexandru",
"Cosmin George",
""
],
[
"Stanescu",
"Marius",
""
],
[
"Pascu",
"Octavian",
""
],
[
"Magan",
"Alexandru",
""
],
[
"Postelnicu",
"Adrian",
""
],
[
"Cucu",
"Horia",
""
]
] |
We describe the submission of the Quo Vadis team to the Traffic4cast competition, which was organized as part of the NeurIPS 2019 series of challenges. Our system consists of a temporal regression module, implemented as $1\times1$ 2d convolutions, augmented with spatio-temporal biases. We have found that using biases is a straightforward and efficient way to include seasonal patterns and to improve the performance of the temporal regression model. Our implementation obtains a mean squared error of $9.47\times 10^{-3}$ on the test data, placing us in eighth place team-wise. We also present our attempts at incorporating spatial correlations into the model; however, contrary to our expectations, adding this type of auxiliary information did not benefit the main system. Our code is available at https://github.com/danoneata/traffic4cast.
|
2401.04971
|
Shu Chen
|
Shu Chen, Zitao Xu, Weike Pan, Qiang Yang, Zhong Ming
|
A Survey on Cross-Domain Sequential Recommendation
|
Accepted to the IJCAI 2024 Survey Track
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-domain sequential recommendation (CDSR) shifts the modeling of user
preferences from flat to stereoscopic by integrating and learning interaction
information from multiple domains at different granularities (ranging from
inter-sequence to intra-sequence and from single-domain to cross-domain). In
this survey, we first define the CDSR problem using a four-dimensional tensor
and then analyze its multi-type input representations under multidirectional
dimensionality reductions. Following that, we provide a systematic overview
from both macro and micro views. From a macro view, we abstract the multi-level
fusion structures of various models across domains and discuss their bridges
for fusion. From a micro view, focusing on the existing models, we first
discuss the basic technologies and then explain the auxiliary learning
technologies. Finally, we exhibit the available public datasets and the
representative experimental results as well as provide some insights into
future directions for research in CDSR.
|
[
{
"created": "Wed, 10 Jan 2024 07:31:26 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jan 2024 07:52:57 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Apr 2024 12:34:19 GMT",
"version": "v3"
},
{
"created": "Fri, 17 May 2024 11:24:01 GMT",
"version": "v4"
}
] |
2024-05-20
|
[
[
"Chen",
"Shu",
""
],
[
"Xu",
"Zitao",
""
],
[
"Pan",
"Weike",
""
],
[
"Yang",
"Qiang",
""
],
[
"Ming",
"Zhong",
""
]
] |
Cross-domain sequential recommendation (CDSR) shifts the modeling of user preferences from flat to stereoscopic by integrating and learning interaction information from multiple domains at different granularities (ranging from inter-sequence to intra-sequence and from single-domain to cross-domain). In this survey, we first define the CDSR problem using a four-dimensional tensor and then analyze its multi-type input representations under multidirectional dimensionality reductions. Following that, we provide a systematic overview from both macro and micro views. From a macro view, we abstract the multi-level fusion structures of various models across domains and discuss their bridges for fusion. From a micro view, focusing on the existing models, we first discuss the basic technologies and then explain the auxiliary learning technologies. Finally, we exhibit the available public datasets and the representative experimental results as well as provide some insights into future directions for research in CDSR.
|
2001.11482
|
Zhuo Chen
|
Zhuo Chen, Takuya Yoshioka, Liang Lu, Tianyan Zhou, Zhong Meng, Yi
Luo, Jian Wu, Xiong Xiao, Jinyu Li
|
Continuous speech separation: dataset and analysis
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a dataset and protocols for evaluating continuous speech
separation algorithms. Most prior studies on speech separation use
pre-segmented signals of artificially mixed speech utterances which are mostly
\emph{fully} overlapped, and the algorithms are evaluated based on
signal-to-distortion ratio or similar performance metrics. However, in natural
conversations, a speech signal is continuous, containing both overlapped and
overlap-free components. In addition, the signal-based metrics have very weak
correlations with automatic speech recognition (ASR) accuracy. We think that
not only does this make it hard to assess the practical relevance of the tested
algorithms, it also hinders researchers from developing systems that can be
readily applied to real scenarios. In this paper, we define continuous speech
separation (CSS) as a task of generating a set of non-overlapped speech signals
from a \textit{continuous} audio stream that contains multiple utterances that
are \emph{partially} overlapped by a varying degree. A new real recorded
dataset, called LibriCSS, is derived from LibriSpeech by concatenating the
corpus utterances to simulate a conversation and capturing the audio replays
with far-field microphones. A Kaldi-based ASR evaluation protocol is also
established by using a well-trained multi-conditional acoustic model. By using
this dataset, several aspects of a recently proposed speaker-independent CSS
algorithm are investigated. The dataset and evaluation scripts are available to
facilitate the research in this direction.
|
[
{
"created": "Thu, 30 Jan 2020 18:01:31 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Apr 2020 03:25:41 GMT",
"version": "v2"
},
{
"created": "Thu, 7 May 2020 09:13:27 GMT",
"version": "v3"
}
] |
2020-05-08
|
[
[
"Chen",
"Zhuo",
""
],
[
"Yoshioka",
"Takuya",
""
],
[
"Lu",
"Liang",
""
],
[
"Zhou",
"Tianyan",
""
],
[
"Meng",
"Zhong",
""
],
[
"Luo",
"Yi",
""
],
[
"Wu",
"Jian",
""
],
[
"Xiao",
"Xiong",
""
],
[
"Li",
"Jinyu",
""
]
] |
This paper describes a dataset and protocols for evaluating continuous speech separation algorithms. Most prior studies on speech separation use pre-segmented signals of artificially mixed speech utterances which are mostly \emph{fully} overlapped, and the algorithms are evaluated based on signal-to-distortion ratio or similar performance metrics. However, in natural conversations, a speech signal is continuous, containing both overlapped and overlap-free components. In addition, the signal-based metrics have very weak correlations with automatic speech recognition (ASR) accuracy. We think that not only does this make it hard to assess the practical relevance of the tested algorithms, it also hinders researchers from developing systems that can be readily applied to real scenarios. In this paper, we define continuous speech separation (CSS) as a task of generating a set of non-overlapped speech signals from a \textit{continuous} audio stream that contains multiple utterances that are \emph{partially} overlapped by a varying degree. A new real recorded dataset, called LibriCSS, is derived from LibriSpeech by concatenating the corpus utterances to simulate a conversation and capturing the audio replays with far-field microphones. A Kaldi-based ASR evaluation protocol is also established by using a well-trained multi-conditional acoustic model. By using this dataset, several aspects of a recently proposed speaker-independent CSS algorithm are investigated. The dataset and evaluation scripts are available to facilitate the research in this direction.
|
2202.07706
|
Kehan Wang
|
Kehan Wang, David Chan, Seth Z. Zhao, John Canny, Avideh Zakhor
|
Misinformation Detection in Social Media Video Posts
|
We discovered an error in our dataset construction where retweets
were not properly filtered. This resulted in test data leakage in training
data, and the results reported are affected
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the growing adoption of short-form video by social media platforms,
reducing the spread of misinformation through video posts has become a critical
challenge for social media providers. In this paper, we develop methods to
detect misinformation in social media posts, exploiting modalities such as
video and text. Due to the lack of large-scale public data for misinformation
detection in multi-modal datasets, we collect 160,000 video posts from Twitter,
and leverage self-supervised learning to learn expressive representations of
joint visual and textual data. In this work, we propose two new methods for
detecting semantic inconsistencies within short-form social media video posts,
based on contrastive learning and masked language modeling. We demonstrate that
our new approaches outperform current state-of-the-art methods on both
artificial data generated by random-swapping of positive samples and in the
wild on a new manually-labeled test set for semantic misinformation.
|
[
{
"created": "Tue, 15 Feb 2022 20:14:54 GMT",
"version": "v1"
},
{
"created": "Sun, 31 Jul 2022 00:50:37 GMT",
"version": "v2"
}
] |
2022-08-02
|
[
[
"Wang",
"Kehan",
""
],
[
"Chan",
"David",
""
],
[
"Zhao",
"Seth Z.",
""
],
[
"Canny",
"John",
""
],
[
"Zakhor",
"Avideh",
""
]
] |
With the growing adoption of short-form video by social media platforms, reducing the spread of misinformation through video posts has become a critical challenge for social media providers. In this paper, we develop methods to detect misinformation in social media posts, exploiting modalities such as video and text. Due to the lack of large-scale public data for misinformation detection in multi-modal datasets, we collect 160,000 video posts from Twitter, and leverage self-supervised learning to learn expressive representations of joint visual and textual data. In this work, we propose two new methods for detecting semantic inconsistencies within short-form social media video posts, based on contrastive learning and masked language modeling. We demonstrate that our new approaches outperform current state-of-the-art methods on both artificial data generated by random-swapping of positive samples and in the wild on a new manually-labeled test set for semantic misinformation.
|
2404.12013
|
Semih Yagcioglu
|
Semih Yagcioglu, Osman Batur \.Ince, Aykut Erdem, Erkut Erdem, Desmond
Elliott, Deniz Yuret
|
Sequential Compositional Generalization in Multimodal Models
|
Accepted to the main conference of NAACL (2024) as a long paper
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The rise of large-scale multimodal models has paved the pathway for
groundbreaking advances in generative modeling and reasoning, unlocking
transformative applications in a variety of complex tasks. However, a pressing
question that remains is their genuine capability for stronger forms of
generalization, which has been largely underexplored in the multimodal setting.
Our study aims to address this by examining sequential compositional
generalization using \textsc{CompAct} (\underline{Comp}ositional
\underline{Act}ivities)\footnote{Project Page:
\url{http://cyberiada.github.io/CompAct}}, a carefully constructed,
perceptually grounded dataset set within a rich backdrop of egocentric kitchen
activity videos. Each instance in our dataset is represented with a combination
of raw video footage, naturally occurring sound, and crowd-sourced step-by-step
descriptions. More importantly, our setup ensures that the individual concepts
are consistently distributed across training and evaluation sets, while their
compositions are novel in the evaluation set. We conduct a comprehensive
assessment of several unimodal and multimodal models. Our findings reveal that
bi-modal and tri-modal models exhibit a clear edge over their text-only
counterparts. This highlights the importance of multimodality while charting a
trajectory for future research in this domain.
|
[
{
"created": "Thu, 18 Apr 2024 09:04:15 GMT",
"version": "v1"
}
] |
2024-04-19
|
[
[
"Yagcioglu",
"Semih",
""
],
[
"İnce",
"Osman Batur",
""
],
[
"Erdem",
"Aykut",
""
],
[
"Erdem",
"Erkut",
""
],
[
"Elliott",
"Desmond",
""
],
[
"Yuret",
"Deniz",
""
]
] |
The rise of large-scale multimodal models has paved the pathway for groundbreaking advances in generative modeling and reasoning, unlocking transformative applications in a variety of complex tasks. However, a pressing question that remains is their genuine capability for stronger forms of generalization, which has been largely underexplored in the multimodal setting. Our study aims to address this by examining sequential compositional generalization using \textsc{CompAct} (\underline{Comp}ositional \underline{Act}ivities)\footnote{Project Page: \url{http://cyberiada.github.io/CompAct}}, a carefully constructed, perceptually grounded dataset set within a rich backdrop of egocentric kitchen activity videos. Each instance in our dataset is represented with a combination of raw video footage, naturally occurring sound, and crowd-sourced step-by-step descriptions. More importantly, our setup ensures that the individual concepts are consistently distributed across training and evaluation sets, while their compositions are novel in the evaluation set. We conduct a comprehensive assessment of several unimodal and multimodal models. Our findings reveal that bi-modal and tri-modal models exhibit a clear edge over their text-only counterparts. This highlights the importance of multimodality while charting a trajectory for future research in this domain.
|
1710.02546
|
Xuemei Xie
|
Xuemei Xie, Chenye Wang, Shu Chen, Guangming Shi, Zhifu Zhao
|
Real-Time Illegal Parking Detection System Based on Deep Learning
|
5 pages, 6 figures
| null |
10.1145/3094243.3094261
| null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Illegal parking has become an increasingly serious problem. Current methods for
detecting illegally parked vehicles are based on background segmentation;
however, such methods are weakly robust and sensitive to the environment.
Benefiting from deep learning, this paper proposes a novel illegal vehicle
parking detection system. Illegal vehicles captured by the camera are first
located and classified by the well-known Single Shot MultiBox Detector
(SSD) algorithm. To improve the performance, we propose to optimize SSD by
adjusting the aspect ratio of default box to accommodate with our dataset
better. After that, a tracking and analysis of movement is adopted to judge the
illegal vehicles in the region of interest (ROI). Experiments show that the
system can achieve a 99% accuracy and real-time (25FPS) detection with strong
robustness in complex environments.
|
[
{
"created": "Thu, 5 Oct 2017 07:57:29 GMT",
"version": "v1"
}
] |
2017-10-10
|
[
[
"Xie",
"Xuemei",
""
],
[
"Wang",
"Chenye",
""
],
[
"Chen",
"Shu",
""
],
[
"Shi",
"Guangming",
""
],
[
"Zhao",
"Zhifu",
""
]
] |
Illegal parking has become an increasingly serious problem. Existing methods for detecting illegally parked vehicles are based on background segmentation, which is weakly robust and sensitive to the environment. Benefiting from deep learning, this paper proposes a novel illegal vehicle parking detection system. Vehicles captured by the camera are first located and classified by the well-known Single Shot MultiBox Detector (SSD) algorithm. To improve performance, we optimize SSD by adjusting the aspect ratios of its default boxes to better fit our dataset. After that, tracking and movement analysis are used to judge illegal vehicles in the region of interest (ROI). Experiments show that the system achieves 99% accuracy and real-time (25 FPS) detection with strong robustness in complex environments.
|
cs/0501045
|
Kenneth Clarkson
|
Kenneth L. Clarkson and Kasturi Varadarajan
|
Improved Approximation Algorithms for Geometric Set Cover
| null | null | null | null |
cs.CG cs.DS
| null |
Given a collection S of subsets of some set U, and M a subset of U, the set
cover problem is to find the smallest subcollection C of S such that M is a
subset of the union of the sets in C. While the general problem is NP-hard to
solve, even approximately, here we consider some geometric special cases, where
usually U = R^d. Extending prior results, we show that approximation algorithms
with provable performance exist, under a certain general condition: that for a
random subset R of S and function f(), there is a decomposition of the portion
of U not covered by R into an expected f(|R|) regions, each region of a
particular simple form. We show that under this condition, a cover of size
O(f(|C|)) can be found. Our proof involves the generalization of shallow
cuttings to more general geometric situations. We obtain constant-factor
approximation algorithms for covering by unit cubes in R^3, for guarding a
one-dimensional terrain, and for covering by similar-sized fat triangles in
R^2. We also obtain improved approximation guarantees for fat triangles, of
arbitrary size, and for a class of fat objects.
|
[
{
"created": "Thu, 20 Jan 2005 21:31:22 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Clarkson",
"Kenneth L.",
""
],
[
"Varadarajan",
"Kasturi",
""
]
] |
Given a collection S of subsets of some set U, and M a subset of U, the set cover problem is to find the smallest subcollection C of S such that M is a subset of the union of the sets in C. While the general problem is NP-hard to solve, even approximately, here we consider some geometric special cases, where usually U = R^d. Extending prior results, we show that approximation algorithms with provable performance exist, under a certain general condition: that for a random subset R of S and function f(), there is a decomposition of the portion of U not covered by R into an expected f(|R|) regions, each region of a particular simple form. We show that under this condition, a cover of size O(f(|C|)) can be found. Our proof involves the generalization of shallow cuttings to more general geometric situations. We obtain constant-factor approximation algorithms for covering by unit cubes in R^3, for guarding a one-dimensional terrain, and for covering by similar-sized fat triangles in R^2. We also obtain improved approximation guarantees for fat triangles, of arbitrary size, and for a class of fat objects.
|
2004.14547
|
Xiaoteng Ma
|
Xiaoteng Ma, Li Xia, Zhengyuan Zhou, Jun Yang, Qianchuan Zhao
|
DSAC: Distributional Soft Actor Critic for Risk-Sensitive Reinforcement
Learning
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a new reinforcement learning (RL) algorithm called
Distributional Soft Actor Critic (DSAC), which exploits the distributional
information of accumulated rewards to achieve better performance. Seamlessly
integrating SAC (which uses entropy to encourage exploration) with a principled
distributional view of the underlying objective, DSAC takes into consideration
the randomness in both action and rewards, and beats the state-of-the-art
baselines in several continuous control benchmarks. Moreover, with the
distributional information of rewards, we propose a unified framework for
risk-sensitive learning, one that goes beyond maximizing only expected
accumulated rewards. Under this framework we discuss three specific
risk-related metrics: percentile, mean-variance and distorted expectation. Our
extensive experiments demonstrate that with distribution modeling in RL, the
agent performs better for both risk-averse and risk-seeking control tasks.
|
[
{
"created": "Thu, 30 Apr 2020 02:23:15 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jun 2020 02:08:35 GMT",
"version": "v2"
}
] |
2020-06-12
|
[
[
"Ma",
"Xiaoteng",
""
],
[
"Xia",
"Li",
""
],
[
"Zhou",
"Zhengyuan",
""
],
[
"Yang",
"Jun",
""
],
[
"Zhao",
"Qianchuan",
""
]
] |
In this paper, we present a new reinforcement learning (RL) algorithm called Distributional Soft Actor Critic (DSAC), which exploits the distributional information of accumulated rewards to achieve better performance. Seamlessly integrating SAC (which uses entropy to encourage exploration) with a principled distributional view of the underlying objective, DSAC takes into consideration the randomness in both action and rewards, and beats the state-of-the-art baselines in several continuous control benchmarks. Moreover, with the distributional information of rewards, we propose a unified framework for risk-sensitive learning, one that goes beyond maximizing only expected accumulated rewards. Under this framework we discuss three specific risk-related metrics: percentile, mean-variance and distorted expectation. Our extensive experiments demonstrate that with distribution modeling in RL, the agent performs better for both risk-averse and risk-seeking control tasks.
|
0902.2866
|
Alain Barrat
|
Ciro Cattuto, Alain Barrat (CPT), Andrea Baldassarri, G. Schehr (LPT),
Vittorio Loreto
|
Collective dynamics of social annotation
| null |
Proceeding of the national academy of sciences 106 (2009) 10511
|
10.1073/pnas.0901136106
| null |
cs.CY cond-mat.stat-mech physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The enormous increase in the popularity and use of the WWW has led in recent
years to important changes in the ways people communicate. An interesting
example of this fact is provided by the now very popular social annotation
systems, through which users annotate resources (such as web pages or digital
photographs) with text keywords dubbed tags. Understanding the rich emerging
structures resulting from the uncoordinated actions of users calls for an
interdisciplinary effort. In particular concepts borrowed from statistical
physics, such as random walks, and the complex networks framework, can
effectively contribute to the mathematical modeling of social annotation
systems. Here we show that the process of social annotation can be seen as a
collective but uncoordinated exploration of an underlying semantic space,
pictured as a graph, through a series of random walks. This modeling framework
reproduces several aspects, so far unexplained, of social annotation, among
which the peculiar growth of the size of the vocabulary used by the community
and its complex network structure that represents an externalization of
semantic structures grounded in cognition and typically hard to access.
|
[
{
"created": "Tue, 17 Feb 2009 08:58:54 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Apr 2009 12:09:47 GMT",
"version": "v2"
}
] |
2009-07-02
|
[
[
"Cattuto",
"Ciro",
"",
"CPT"
],
[
"Barrat",
"Alain",
"",
"CPT"
],
[
"Baldassarri",
"Andrea",
"",
"LPT"
],
[
"Schehr",
"G.",
"",
"LPT"
],
[
"Loreto",
"Vittorio",
""
]
] |
The enormous increase in the popularity and use of the WWW has led in recent years to important changes in the ways people communicate. An interesting example of this fact is provided by the now very popular social annotation systems, through which users annotate resources (such as web pages or digital photographs) with text keywords dubbed tags. Understanding the rich emerging structures resulting from the uncoordinated actions of users calls for an interdisciplinary effort. In particular concepts borrowed from statistical physics, such as random walks, and the complex networks framework, can effectively contribute to the mathematical modeling of social annotation systems. Here we show that the process of social annotation can be seen as a collective but uncoordinated exploration of an underlying semantic space, pictured as a graph, through a series of random walks. This modeling framework reproduces several aspects, so far unexplained, of social annotation, among which the peculiar growth of the size of the vocabulary used by the community and its complex network structure that represents an externalization of semantic structures grounded in cognition and typically hard to access.
|
2207.01193
|
Huimin Chen
|
Huimin Chen, Fengran Mo, Yanhao Wang, Cen Chen, Jian-Yun Nie, Chengyu
Wang and Jamie Cui
|
A Customized Text Sanitization Mechanism with Differential Privacy
|
This work has been accepted to the Findings of ACL 2023
|
https://aclanthology.org/2023.findings-acl.355/
|
10.18653/v1/2023.findings-acl.355
| null |
cs.CR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As privacy issues are receiving increasing attention within the Natural
Language Processing (NLP) community, numerous methods have been proposed to
sanitize texts subject to differential privacy. However, the state-of-the-art
text sanitization mechanisms based on metric local differential privacy (MLDP)
do not apply to non-metric semantic similarity measures and cannot achieve good
trade-offs between privacy and utility. To address the above limitations, we
propose a novel Customized Text (CusText) sanitization mechanism based on the
original $\epsilon$-differential privacy (DP) definition, which is compatible
with any similarity measure. Furthermore, CusText assigns each input token a
customized output set of tokens to provide more advanced privacy protection at
the token level. Extensive experiments on several benchmark datasets show that
CusText achieves a better trade-off between privacy and utility than existing
mechanisms. The code is available at https://github.com/sai4july/CusText.
|
[
{
"created": "Mon, 4 Jul 2022 04:37:42 GMT",
"version": "v1"
},
{
"created": "Tue, 23 May 2023 04:52:19 GMT",
"version": "v2"
}
] |
2023-09-04
|
[
[
"Chen",
"Huimin",
""
],
[
"Mo",
"Fengran",
""
],
[
"Wang",
"Yanhao",
""
],
[
"Chen",
"Cen",
""
],
[
"Nie",
"Jian-Yun",
""
],
[
"Wang",
"Chengyu",
""
],
[
"Cui",
"Jamie",
""
]
] |
As privacy issues are receiving increasing attention within the Natural Language Processing (NLP) community, numerous methods have been proposed to sanitize texts subject to differential privacy. However, the state-of-the-art text sanitization mechanisms based on metric local differential privacy (MLDP) do not apply to non-metric semantic similarity measures and cannot achieve good trade-offs between privacy and utility. To address the above limitations, we propose a novel Customized Text (CusText) sanitization mechanism based on the original $\epsilon$-differential privacy (DP) definition, which is compatible with any similarity measure. Furthermore, CusText assigns each input token a customized output set of tokens to provide more advanced privacy protection at the token level. Extensive experiments on several benchmark datasets show that CusText achieves a better trade-off between privacy and utility than existing mechanisms. The code is available at https://github.com/sai4july/CusText.
|
1202.2089
|
Jiaming Xu
|
Jiaming Xu and Bruce Hajek
|
The Supermarket Game
|
Submitted to Stochastic Systems
| null | null | null |
cs.IT cs.GT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A supermarket game is considered with $N$ FCFS queues with unit exponential
service rate and global Poisson arrival rate $N \lambda$. Upon arrival each
customer chooses a number of queues to be sampled uniformly at random and joins
the least loaded sampled queue. Customers are assumed to incur costs for both
waiting and sampling, and they want to minimize their own expected total cost.
We study the supermarket game in a mean field model that corresponds to the
limit as $N$ converges to infinity in the sense that (i) for a fixed symmetric
customer strategy, the joint equilibrium distribution of any fixed number of
queues converges as $N \to \infty$ to a product distribution determined by the
mean field model and (ii) a Nash equilibrium for the mean field model is an
$\epsilon$-Nash equilibrium for the finite $N$ model with $N$ sufficiently
large. It is shown that there always exists a Nash equilibrium for $\lambda <1$
and the Nash equilibrium is unique with homogeneous waiting cost for $\lambda^2
\le 1/2$. Furthermore, we find that the action of sampling more queues by some
customers has a positive externality on the other customers in the mean field
model, but can have a negative externality for finite $N$.
|
[
{
"created": "Thu, 9 Feb 2012 19:42:28 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Dec 2012 15:29:24 GMT",
"version": "v2"
}
] |
2012-12-27
|
[
[
"Xu",
"Jiaming",
""
],
[
"Hajek",
"Bruce",
""
]
] |
A supermarket game is considered with $N$ FCFS queues with unit exponential service rate and global Poisson arrival rate $N \lambda$. Upon arrival each customer chooses a number of queues to be sampled uniformly at random and joins the least loaded sampled queue. Customers are assumed to incur costs for both waiting and sampling, and they want to minimize their own expected total cost. We study the supermarket game in a mean field model that corresponds to the limit as $N$ converges to infinity in the sense that (i) for a fixed symmetric customer strategy, the joint equilibrium distribution of any fixed number of queues converges as $N \to \infty$ to a product distribution determined by the mean field model and (ii) a Nash equilibrium for the mean field model is an $\epsilon$-Nash equilibrium for the finite $N$ model with $N$ sufficiently large. It is shown that there always exists a Nash equilibrium for $\lambda <1$ and the Nash equilibrium is unique with homogeneous waiting cost for $\lambda^2 \le 1/2$. Furthermore, we find that the action of sampling more queues by some customers has a positive externality on the other customers in the mean field model, but can have a negative externality for finite $N$.
|
2205.01440
|
Daniel Graziotin
|
Verena Ebert, Daniel Graziotin, Stefan Wagner
|
How Are Communication Channels on GitHub Presented to Their Intended
Audience? -- A Thematic Analysis
|
10 pages, 5 figures. Accepted for presentation at the International
Conference on Evaluation and Assessment in Software Engineering (EASE) 2022
|
In Proceedings of the 26th International Conference on Evaluation
and Assessment in Software Engineering (EASE 2022). Association for Computing
Machinery, New York, NY, USA, 40-49
|
10.1145/3530019.3530024
| null |
cs.SE cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Communication is essential in software development, and even more in
distributed settings. Communication activities need to be organized and
coordinated to defend against the threat of productivity losses, increases in
cognitive load, and stress among team members. With a plethora of communication
channels that were identified by previous research in open-source projects,
there is a need to explore organizational issues in how these communication
channels are introduced, explained, and motivated for use among all project
members. In this study, we wanted to understand which communication channels
are used in GitHub projects and how they are presented to the GitHub project
audience. We employed thematic analysis to analyze 151 artifacts in 90 GitHub
projects. Our results revealed 32 unique communication channels that can be
divided into nine different types. Projects mostly provide channels of
different types, but for some types (e.g., chat) it is common to provide
several channels. Maintainers are aware that channels have different properties
and help the developers to decide which channel should be used in which case.
However, this is not true for all projects, and often we have not found any
explicit reasons why maintainers chose to provide one channel over another.
Different channels can be used for different purposes and have different
affordances, so maintainers have to decide wisely which channels they want to
provide and make clear which channel should be used in which case. Otherwise,
developers might feel overwhelmed by too many channels, and information can get
fragmented over multiple channels.
|
[
{
"created": "Tue, 3 May 2022 11:57:53 GMT",
"version": "v1"
}
] |
2023-09-19
|
[
[
"Ebert",
"Verena",
""
],
[
"Graziotin",
"Daniel",
""
],
[
"Wagner",
"Stefan",
""
]
] |
Communication is essential in software development, and even more in distributed settings. Communication activities need to be organized and coordinated to defend against the threat of productivity losses, increases in cognitive load, and stress among team members. With a plethora of communication channels that were identified by previous research in open-source projects, there is a need to explore organizational issues in how these communication channels are introduced, explained, and motivated for use among all project members. In this study, we wanted to understand which communication channels are used in GitHub projects and how they are presented to the GitHub project audience. We employed thematic analysis to analyze 151 artifacts in 90 GitHub projects. Our results revealed 32 unique communication channels that can be divided into nine different types. Projects mostly provide channels of different types, but for some types (e.g., chat) it is common to provide several channels. Maintainers are aware that channels have different properties and help the developers to decide which channel should be used in which case. However, this is not true for all projects, and often we have not found any explicit reasons why maintainers chose to provide one channel over another. Different channels can be used for different purposes and have different affordances, so maintainers have to decide wisely which channels they want to provide and make clear which channel should be used in which case. Otherwise, developers might feel overwhelmed by too many channels, and information can get fragmented over multiple channels.
|
1810.09431
|
Christopher Shulby
|
Christopher Dane Shulby, Leonardo Pombal, Vitor Jord\~ao, Guilherme
Ziolle, Bruno Martho, Ant\^onio Postal, Thiago Prochnow
|
Proactive Security: Embedded AI Solution for Violent and Abusive Speech
Recognition
|
6 Pages, Bracis 2018 Preprint
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Violence is an epidemic in Brazil and a problem on the rise world-wide.
Mobile devices provide communication technologies which can be used to monitor
and alert about violent situations. However, current solutions, like panic
buttons or safe words, might increase the loss of life in violent situations.
We propose an embedded artificial intelligence solution, using natural language
and speech processing technology, to silently alert someone who can help in
this situation. The corpus used contains 400 positive phrases and 800 negative
phrases, totaling 1,200 sentences, whose features are extracted using two
well-known methods for natural language processing tasks, bag-of-words and
word embeddings, and classified with a support vector machine. We describe the
proof-of-concept product in development with promising results, indicating a
path towards a commercial product. More importantly we show that model
improvements via word embeddings and data augmentation techniques provide an
intrinsically robust model. The final embedded solution also has a small
footprint of less than 10 MB.
|
[
{
"created": "Mon, 22 Oct 2018 17:56:08 GMT",
"version": "v1"
}
] |
2018-10-23
|
[
[
"Shulby",
"Christopher Dane",
""
],
[
"Pombal",
"Leonardo",
""
],
[
"Jordão",
"Vitor",
""
],
[
"Ziolle",
"Guilherme",
""
],
[
"Martho",
"Bruno",
""
],
[
"Postal",
"Antônio",
""
],
[
"Prochnow",
"Thiago",
""
]
] |
Violence is an epidemic in Brazil and a problem on the rise world-wide. Mobile devices provide communication technologies which can be used to monitor and alert about violent situations. However, current solutions, like panic buttons or safe words, might increase the loss of life in violent situations. We propose an embedded artificial intelligence solution, using natural language and speech processing technology, to silently alert someone who can help in this situation. The corpus used contains 400 positive phrases and 800 negative phrases, totaling 1,200 sentences, whose features are extracted using two well-known methods for natural language processing tasks, bag-of-words and word embeddings, and classified with a support vector machine. We describe the proof-of-concept product in development with promising results, indicating a path towards a commercial product. More importantly we show that model improvements via word embeddings and data augmentation techniques provide an intrinsically robust model. The final embedded solution also has a small footprint of less than 10 MB.
|
1703.09499
|
Yangyang Li
|
Yangyang Li and Ruqian Lu
|
Locality preserving projection on SPD matrix Lie group: algorithm and
analysis
|
15 pages, 3 tables
| null | null | null |
cs.CV cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Symmetric positive definite (SPD) matrices used as feature descriptors in
image recognition are usually high dimensional. Traditional manifold learning
is only applicable for reducing the dimension of high-dimensional vector-form
data. For high-dimensional SPD matrices, directly using manifold learning
algorithms to reduce the dimension of matrix-form data is impossible. The SPD
matrix must first be transformed into a long vector, and then the dimension of
this vector must be reduced. However, this approach breaks the spatial
structure of the SPD matrix space. To overcome this limitation, we propose a
new dimension reduction algorithm on SPD matrix space to transform
high-dimensional SPD matrices into low-dimensional SPD matrices. Our work is
based on the fact that the set of all SPD matrices with the same size has a Lie
group structure, and we aim to transform the manifold learning to the SPD
matrix Lie group. We use the basic idea of the manifold learning algorithm
called locality preserving projection (LPP) to construct the corresponding
Laplacian matrix on the SPD matrix Lie group. Thus, we call our approach
Lie-LPP to emphasize its Lie group character. We present a detailed algorithm
analysis and show through experiments that Lie-LPP achieves effective results
on human action recognition and human face recognition.
|
[
{
"created": "Tue, 28 Mar 2017 10:38:22 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Nov 2017 02:47:32 GMT",
"version": "v2"
}
] |
2017-11-17
|
[
[
"Li",
"Yangyang",
""
],
[
"Lu",
"Ruqian",
""
]
] |
Symmetric positive definite (SPD) matrices used as feature descriptors in image recognition are usually high dimensional. Traditional manifold learning is only applicable for reducing the dimension of high-dimensional vector-form data. For high-dimensional SPD matrices, directly using manifold learning algorithms to reduce the dimension of matrix-form data is impossible. The SPD matrix must first be transformed into a long vector, and then the dimension of this vector must be reduced. However, this approach breaks the spatial structure of the SPD matrix space. To overcome this limitation, we propose a new dimension reduction algorithm on SPD matrix space to transform high-dimensional SPD matrices into low-dimensional SPD matrices. Our work is based on the fact that the set of all SPD matrices with the same size has a Lie group structure, and we aim to transform the manifold learning to the SPD matrix Lie group. We use the basic idea of the manifold learning algorithm called locality preserving projection (LPP) to construct the corresponding Laplacian matrix on the SPD matrix Lie group. Thus, we call our approach Lie-LPP to emphasize its Lie group character. We present a detailed algorithm analysis and show through experiments that Lie-LPP achieves effective results on human action recognition and human face recognition.
|
2209.10868
|
Chengran Yang
|
Yang Chengran, Bowen Xu, Ferdian Thung, Yucen Shi, Ting Zhang, Zhou
Yang, Xin Zhou, Jieke Shi, Junda He, DongGyun Han, David Lo
|
Answer Summarization for Technical Queries: Benchmark and New Approach
|
Accepted by ASE 2022
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior studies have demonstrated that approaches to generate an answer summary
for a given technical query in Software Question and Answer (SQA) sites are
desired. We find that existing approaches are assessed solely through user
studies. There is a need for a benchmark with ground truth summaries to
complement assessment through user studies. Unfortunately, such a benchmark is
non-existent for answer summarization for technical queries from SQA sites. To
fill the gap, we manually construct a high-quality benchmark to enable
automatic evaluation of answer summarization for technical queries for SQA
sites. Using the benchmark, we comprehensively evaluate the performance of
existing approaches and find that there is still considerable room for
improvement. Motivated by the results, we propose a new approach, TechSumBot,
with three key modules: 1) Usefulness Ranking module, 2) Centrality Estimation
module, and 3)
Redundancy Removal module. We evaluate TechSumBot in both automatic (i.e.,
using our benchmark) and manual (i.e., via a user study) manners. The results
from both evaluations consistently demonstrate that TechSumBot outperforms the
best performing baseline approaches from both SE and NLP domains by a large
margin, i.e., 10.83%-14.90%, 32.75%-36.59%, and 12.61%-17.54%, in terms of
ROUGE-1, ROUGE-2, and ROUGE-L on automatic evaluation, and 5.79%-9.23% and
17.03%-17.68%, in terms of average usefulness and diversity score on human
evaluation. This highlights that the automatic evaluation of our benchmark can
uncover findings similar to the ones found through user studies. More
importantly, automatic evaluation has a much lower cost, especially when it is
used to assess a new approach. Additionally, we also conducted an ablation
study, which demonstrates that each module in TechSumBot contributes to
boosting the overall performance of TechSumBot.
|
[
{
"created": "Thu, 22 Sep 2022 09:05:46 GMT",
"version": "v1"
}
] |
2022-09-23
|
[
[
"Chengran",
"Yang",
""
],
[
"Xu",
"Bowen",
""
],
[
"Thung",
"Ferdian",
""
],
[
"Shi",
"Yucen",
""
],
[
"Zhang",
"Ting",
""
],
[
"Yang",
"Zhou",
""
],
[
"Zhou",
"Xin",
""
],
[
"Shi",
"Jieke",
""
],
[
"He",
"Junda",
""
],
[
"Han",
"DongGyun",
""
],
[
"Lo",
"David",
""
]
] |
Prior studies have demonstrated that approaches to generate an answer summary for a given technical query in Software Question and Answer (SQA) sites are desired. We find that existing approaches are assessed solely through user studies. There is a need for a benchmark with ground truth summaries to complement assessment through user studies. Unfortunately, such a benchmark is non-existent for answer summarization for technical queries from SQA sites. To fill the gap, we manually construct a high-quality benchmark to enable automatic evaluation of answer summarization for technical queries for SQA sites. Using the benchmark, we comprehensively evaluate the performance of existing approaches and find that there is still considerable room for improvement. Motivated by the results, we propose a new approach, TechSumBot, with three key modules: 1) Usefulness Ranking module, 2) Centrality Estimation module, and 3) Redundancy Removal module. We evaluate TechSumBot in both automatic (i.e., using our benchmark) and manual (i.e., via a user study) manners. The results from both evaluations consistently demonstrate that TechSumBot outperforms the best performing baseline approaches from both SE and NLP domains by a large margin, i.e., 10.83%-14.90%, 32.75%-36.59%, and 12.61%-17.54%, in terms of ROUGE-1, ROUGE-2, and ROUGE-L on automatic evaluation, and 5.79%-9.23% and 17.03%-17.68%, in terms of average usefulness and diversity score on human evaluation. This highlights that the automatic evaluation of our benchmark can uncover findings similar to the ones found through user studies. More importantly, automatic evaluation has a much lower cost, especially when it is used to assess a new approach. Additionally, we also conducted an ablation study, which demonstrates that each module in TechSumBot contributes to boosting the overall performance of TechSumBot.
|
2310.09242
|
Nitinder Mohan Dr.
|
Nitinder Mohan, Andrew Ferguson, Hendrik Cech, Prakita Rayyan Renatin,
Rohan Bose, Mahesh Marina, J\"org Ott
|
A Multifaceted Look at Starlink Performance
|
Accepted in ACM Web Conference 2024 (WWW 24)
|
In Proceedings of ACM Web Conference 2024 (WWW 24)
|
10.1145/3589334.3645328
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, Low-Earth Orbit (LEO) mega-constellations have emerged as a
promising network technology and have ushered in a new era for democratizing
Internet access. The Starlink network from SpaceX stands out as the only
consumer-facing LEO network with over 2M customers and more than 4000
operational satellites. In this paper, we conduct the first-of-its-kind
extensive multi-faceted analysis of Starlink network performance leveraging
several measurement sources. First, based on 19.2M crowdsourced M-Lab speed
test measurements from 34 countries since 2021, we analyze Starlink global
performance relative to terrestrial cellular networks. Second, we examine
Starlink's ability to support real-time web-based latency and
bandwidth-critical applications by analyzing the performance of (i) Zoom video
conferencing, and (ii) Luna cloud gaming, comparing it to 5G and terrestrial
fiber. Third, we orchestrate targeted measurements from Starlink-enabled RIPE
Atlas probes to shed light on the last-mile Starlink access and other factors
affecting its performance globally. Finally, we conduct controlled experiments
from Starlink dishes in two countries and analyze the impact of globally
synchronized "15-second reconfiguration intervals" of the links that cause
substantial latency and throughput variations. Our unique analysis provides
revealing insights on global Starlink functionality and paints the most
comprehensive picture of the LEO network's operation to date.
|
[
{
"created": "Fri, 13 Oct 2023 16:47:26 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2024 15:47:07 GMT",
"version": "v2"
}
] |
2024-02-23
|
[
[
"Mohan",
"Nitinder",
""
],
[
"Ferguson",
"Andrew",
""
],
[
"Cech",
"Hendrik",
""
],
[
"Renatin",
"Prakita Rayyan",
""
],
[
"Bose",
"Rohan",
""
],
[
"Marina",
"Mahesh",
""
],
[
"Ott",
"Jörg",
""
]
] |
In recent years, Low-Earth Orbit (LEO) mega-constellations have emerged as a promising network technology and have ushered in a new era for democratizing Internet access. The Starlink network from SpaceX stands out as the only consumer-facing LEO network with over 2M+ customers and more than 4000 operational satellites. In this paper, we conduct the first-of-its-kind extensive multi-faceted analysis of Starlink network performance leveraging several measurement sources. First, based on 19.2M crowdsourced M-Lab speed test measurements from 34 countries since 2021, we analyze Starlink global performance relative to terrestrial cellular networks. Second, we examine Starlink's ability to support real-time web-based latency and bandwidth-critical applications by analyzing the performance of (i) Zoom video conferencing, and (ii) Luna cloud gaming, comparing it to 5G and terrestrial fiber. Third, we orchestrate targeted measurements from Starlink-enabled RIPE Atlas probes to shed light on the last-mile Starlink access and other factors affecting its performance globally. Finally, we conduct controlled experiments from Starlink dishes in two countries and analyze the impact of globally synchronized "15-second reconfiguration intervals" of the links that cause substantial latency and throughput variations. Our unique analysis provides revealing insights on global Starlink functionality and paints the most comprehensive picture of the LEO network's operation to date.
|
1909.13794
|
YunKai Wang
|
Yunkai Wang, Shenhan Jia, Zexi Chen, Zheyuan Huang, Rong Xiong
|
Multi-agent Collaboration for Feasible Collaborative Behavior
Construction and Evaluation
|
7 pages, IEEE ROBIO 2019
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the case of the two-person zero-sum stochastic game with a central
controller, this paper proposes a best collaborative behavior search and
selection algorithm based on reinforcement learning, in response to how to
choose the best collaborative object and action for the central controller. In
view of the existing multi-agent collaboration and confrontation reinforcement
learning methods, the methods of traversing all actions in a certain state
lead to the problem of long calculation time and unsafe policy exploration.
This paper proposes to construct a feasible collaborative behavior set by using
action space discretization, establishing models of both sides, model-based
prediction and parallel search. Then, we use the deep q-learning method in
reinforcement learning to train the scoring function to select the optimal
collaboration behavior from the feasible collaborative behavior set. This
method enables efficient and accurate calculation in an environment with strong
confrontation, high dynamics and a large number of agents, which is verified by
the RoboCup Small Size League robots passing collaboration.
|
[
{
"created": "Mon, 30 Sep 2019 15:43:49 GMT",
"version": "v1"
}
] |
2019-10-01
|
[
[
"Wang",
"Yunkai",
""
],
[
"Jia",
"Shenhan",
""
],
[
"Chen",
"Zexi",
""
],
[
"Huang",
"Zheyuan",
""
],
[
"Xiong",
"Rong",
""
]
] |
In the case of the two-person zero-sum stochastic game with a central controller, this paper proposes a best collaborative behavior search and selection algorithm based on reinforcement learning, in response to how to choose the best collaborative object and action for the central controller. In view of the existing multi-agent collaboration and confrontation reinforcement learning methods, the methods of traversing all actions in a certain state lead to the problem of long calculation time and unsafe policy exploration. This paper proposes to construct a feasible collaborative behavior set by using action space discretization, establishing models of both sides, model-based prediction and parallel search. Then, we use the deep q-learning method in reinforcement learning to train the scoring function to select the optimal collaboration behavior from the feasible collaborative behavior set. This method enables efficient and accurate calculation in an environment with strong confrontation, high dynamics and a large number of agents, which is verified by the RoboCup Small Size League robots passing collaboration.
|
1312.0786
|
Yong Liu
|
Yiyi Liao, Yue Wang, Yong Liu
|
Image Representation Learning Using Graph Regularized Auto-Encoders
|
9pages
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of image representation for the tasks of unsupervised
learning and semi-supervised learning. In those learning tasks, the raw image
vectors may not provide enough representation for their intrinsic structures
due to their highly dense feature space. To overcome this problem, the raw
image vectors should be mapped to a proper representation space which can
capture the latent structure of the original data and represent the data
explicitly for further learning tasks such as clustering.
Inspired by the recent research works on deep neural network and
representation learning, in this paper, we introduce the multiple-layer
auto-encoder into image representation; we also apply the locally invariant
idea to our image representation with auto-encoders and propose a novel
method, called Graph regularized Auto-Encoder (GAE). GAE can provide a compact
representation which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure.
Extensive experiments on image clustering show encouraging results of the
proposed algorithm in comparison to the state-of-the-art algorithms on
real-world cases.
|
[
{
"created": "Tue, 3 Dec 2013 11:59:57 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Feb 2014 11:13:57 GMT",
"version": "v2"
}
] |
2014-02-20
|
[
[
"Liao",
"Yiyi",
""
],
[
"Wang",
"Yue",
""
],
[
"Liu",
"Yong",
""
]
] |
We consider the problem of image representation for the tasks of unsupervised learning and semi-supervised learning. In those learning tasks, the raw image vectors may not provide enough representation for their intrinsic structures due to their highly dense feature space. To overcome this problem, the raw image vectors should be mapped to a proper representation space which can capture the latent structure of the original data and represent the data explicitly for further learning tasks such as clustering. Inspired by the recent research works on deep neural network and representation learning, in this paper, we introduce the multiple-layer auto-encoder into image representation; we also apply the locally invariant idea to our image representation with auto-encoders and propose a novel method, called Graph regularized Auto-Encoder (GAE). GAE can provide a compact representation which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. Extensive experiments on image clustering show encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world cases.
|
2008.11256
|
Gilbert Bernstein
|
Gilbert Bernstein and Michael Mara and Tzu-Mao Li and Dougal Maclaurin
and Jonathan Ragan-Kelley
|
Differentiating a Tensor Language
|
In-progress Draft; unsubmitted
| null | null | null |
cs.PL cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
How does one compile derivatives of tensor programs, such that the resulting
code is purely functional (hence easier to optimize and parallelize) and
provably efficient relative to the original program? We show that naively
differentiating tensor code---as done in popular systems like Tensorflow and
PyTorch---can cause asymptotic slowdowns in pathological cases, violating the
Cheap Gradients Principle. However, all existing automatic differentiation
methods that guarantee this principle (for variable size data) do so by relying
on += mutation through aliases/pointers---which complicates downstream
optimization. We provide the first purely functional, provably efficient,
adjoint/reverse-mode derivatives of array/tensor code by explicitly accounting
for sparsity. We do this by focusing on the indicator function from Iverson's
APL. We also introduce a new "Tensor SSA" normal form and a new derivation of
reverse-mode automatic differentiation based on the universal property of
inner-products.
|
[
{
"created": "Tue, 25 Aug 2020 20:30:05 GMT",
"version": "v1"
}
] |
2020-10-01
|
[
[
"Bernstein",
"Gilbert",
""
],
[
"Mara",
"Michael",
""
],
[
"Li",
"Tzu-Mao",
""
],
[
"Maclaurin",
"Dougal",
""
],
[
"Ragan-Kelley",
"Jonathan",
""
]
] |
How does one compile derivatives of tensor programs, such that the resulting code is purely functional (hence easier to optimize and parallelize) and provably efficient relative to the original program? We show that naively differentiating tensor code---as done in popular systems like Tensorflow and PyTorch---can cause asymptotic slowdowns in pathological cases, violating the Cheap Gradients Principle. However, all existing automatic differentiation methods that guarantee this principle (for variable size data) do so by relying on += mutation through aliases/pointers---which complicates downstream optimization. We provide the first purely functional, provably efficient, adjoint/reverse-mode derivatives of array/tensor code by explicitly accounting for sparsity. We do this by focusing on the indicator function from Iverson's APL. We also introduce a new "Tensor SSA" normal form and a new derivation of reverse-mode automatic differentiation based on the universal property of inner-products.
|
2008.10449
|
Feng Xia
|
Feng Xia, Qiuyuan Yang, Jie Li, Jiannong Cao, Li Liu, Ahmedin Mohammed
Ahmed
|
Data Dissemination Using Interest Tree in Socially Aware Networking
|
13 pages, 9 figures
|
Computer Networks, Vol. 91, November 2015, pp: 495-507
|
10.1016/j.comnet.2015.08.047
| null |
cs.SI cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Socially aware networking (SAN) exploits social characteristics of mobile
users to streamline data dissemination protocols in opportunistic environments.
Existing protocols in this area utilized various social features such as user
interests, social similarity, and community structure to improve the
performance of data dissemination. However, the interrelationship between user
interests and its impact on the efficiency of data dissemination has not been
explored sufficiently. In this paper, we analyze various kinds of relationships
between user interests and model them using a layer-based structure in order to
form social communities in SAN paradigm. We propose Int-Tree, an Interest-Tree
based scheme which uses the relationship between user interests to improve the
performance of data dissemination. The core of Int-Tree is the interest-tree, a
tree-based community structure that combines two social features, i.e. density
of a community and social tie, to support data dissemination. The simulation
results show that Int-Tree achieves higher delivery ratio, lower overhead, in
comparison to two benchmark protocols, PROPHET and Epidemic routing. In
addition, Int-Tree can perform with 1.36 hop counts on average, and tolerable
latency in terms of buffer size, time to live (TTL) and simulation duration.
Finally, Int-Tree keeps stable performance with various parameters.
|
[
{
"created": "Sun, 9 Aug 2020 03:45:52 GMT",
"version": "v1"
}
] |
2020-08-25
|
[
[
"Xia",
"Feng",
""
],
[
"Yang",
"Qiuyuan",
""
],
[
"Li",
"Jie",
""
],
[
"Cao",
"Jiannong",
""
],
[
"Liu",
"Li",
""
],
[
"Ahmed",
"Ahmedin Mohammed",
""
]
] |
Socially aware networking (SAN) exploits social characteristics of mobile users to streamline data dissemination protocols in opportunistic environments. Existing protocols in this area utilized various social features such as user interests, social similarity, and community structure to improve the performance of data dissemination. However, the interrelationship between user interests and its impact on the efficiency of data dissemination has not been explored sufficiently. In this paper, we analyze various kinds of relationships between user interests and model them using a layer-based structure in order to form social communities in SAN paradigm. We propose Int-Tree, an Interest-Tree based scheme which uses the relationship between user interests to improve the performance of data dissemination. The core of Int-Tree is the interest-tree, a tree-based community structure that combines two social features, i.e. density of a community and social tie, to support data dissemination. The simulation results show that Int-Tree achieves higher delivery ratio, lower overhead, in comparison to two benchmark protocols, PROPHET and Epidemic routing. In addition, Int-Tree can perform with 1.36 hop counts on average, and tolerable latency in terms of buffer size, time to live (TTL) and simulation duration. Finally, Int-Tree keeps stable performance with various parameters.
|
2308.04022
|
Quan Li
|
Longfei Chen, Qianyu Liu, Chenyang Zhang, Yangkun Huang, Zhenhui Peng,
Haipeng Zeng, Zhida Sun, Xiaojuan Ma, and Quan Li
|
Amplifying the Music Listening Experience through Song Comments on Music
Streaming Platforms
|
In the Proceedings of ChinaVis 2023
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Music streaming services are increasingly popular among younger generations
who seek social experiences through personal expression and sharing of
subjective feelings in comments. However, such emotional aspects are often
ignored by current platforms, which affects the listeners' ability to find
music that triggers specific personal feelings. To address this gap, this study
proposes a novel approach that leverages deep learning methods to capture
contextual keywords, sentiments, and induced mechanisms from song comments. The
study augments a current music app with two features, including the
presentation of tags that best represent song comments and a novel map metaphor
that reorganizes song comments based on chronological order, content, and
sentiment. The effectiveness of the proposed approach is validated through a
usage scenario and a user study that demonstrate its capability to improve the
user experience of exploring songs and browsing comments of interest. This
study contributes to the advancement of music streaming services by providing a
more personalized and emotionally rich music experience for younger
generations.
|
[
{
"created": "Tue, 8 Aug 2023 03:35:20 GMT",
"version": "v1"
}
] |
2023-08-09
|
[
[
"Chen",
"Longfei",
""
],
[
"Liu",
"Qianyu",
""
],
[
"Zhang",
"Chenyang",
""
],
[
"Huang",
"Yangkun",
""
],
[
"Peng",
"Zhenhui",
""
],
[
"Zeng",
"Haipeng",
""
],
[
"Sun",
"Zhida",
""
],
[
"Ma",
"Xiaojuan",
""
],
[
"Li",
"Quan",
""
]
] |
Music streaming services are increasingly popular among younger generations who seek social experiences through personal expression and sharing of subjective feelings in comments. However, such emotional aspects are often ignored by current platforms, which affects the listeners' ability to find music that triggers specific personal feelings. To address this gap, this study proposes a novel approach that leverages deep learning methods to capture contextual keywords, sentiments, and induced mechanisms from song comments. The study augments a current music app with two features, including the presentation of tags that best represent song comments and a novel map metaphor that reorganizes song comments based on chronological order, content, and sentiment. The effectiveness of the proposed approach is validated through a usage scenario and a user study that demonstrate its capability to improve the user experience of exploring songs and browsing comments of interest. This study contributes to the advancement of music streaming services by providing a more personalized and emotionally rich music experience for younger generations.
|
2310.17163
|
Yingwen Wu
|
Yingwen Wu, Tao Li, Xinwen Cheng, Jie Yang, Xiaolin Huang
|
Low-Dimensional Gradient Helps Out-of-Distribution Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting out-of-distribution (OOD) samples is essential for ensuring the
reliability of deep neural networks (DNNs) in real-world scenarios. While
previous research has predominantly investigated the disparity between
in-distribution (ID) and OOD data through forward information analysis, the
discrepancy in parameter gradients during the backward process of DNNs has
received insufficient attention. Existing studies on gradient disparities
mainly focus on the utilization of gradient norms, neglecting the wealth of
information embedded in gradient directions. To bridge this gap, in this paper,
we conduct a comprehensive investigation into leveraging the entirety of
gradient information for OOD detection. The primary challenge arises from the
high dimensionality of gradients due to the large number of network parameters.
To solve this problem, we propose performing linear dimension reduction on the
gradient using a designated subspace that comprises principal components. This
innovative technique enables us to obtain a low-dimensional representation of
the gradient with minimal information loss. Subsequently, by integrating the
reduced gradient with various existing detection score functions, our approach
demonstrates superior performance across a wide range of detection tasks. For
instance, on the ImageNet benchmark with ResNet50 model, our method achieves an
average reduction of 11.15$\%$ in the false positive rate at 95$\%$ recall
(FPR95) compared to the current state-of-the-art approach. The code will be
released.
|
[
{
"created": "Thu, 26 Oct 2023 05:28:32 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jul 2024 02:59:23 GMT",
"version": "v2"
}
] |
2024-07-23
|
[
[
"Wu",
"Yingwen",
""
],
[
"Li",
"Tao",
""
],
[
"Cheng",
"Xinwen",
""
],
[
"Yang",
"Jie",
""
],
[
"Huang",
"Xiaolin",
""
]
] |
Detecting out-of-distribution (OOD) samples is essential for ensuring the reliability of deep neural networks (DNNs) in real-world scenarios. While previous research has predominantly investigated the disparity between in-distribution (ID) and OOD data through forward information analysis, the discrepancy in parameter gradients during the backward process of DNNs has received insufficient attention. Existing studies on gradient disparities mainly focus on the utilization of gradient norms, neglecting the wealth of information embedded in gradient directions. To bridge this gap, in this paper, we conduct a comprehensive investigation into leveraging the entirety of gradient information for OOD detection. The primary challenge arises from the high dimensionality of gradients due to the large number of network parameters. To solve this problem, we propose performing linear dimension reduction on the gradient using a designated subspace that comprises principal components. This innovative technique enables us to obtain a low-dimensional representation of the gradient with minimal information loss. Subsequently, by integrating the reduced gradient with various existing detection score functions, our approach demonstrates superior performance across a wide range of detection tasks. For instance, on the ImageNet benchmark with ResNet50 model, our method achieves an average reduction of 11.15$\%$ in the false positive rate at 95$\%$ recall (FPR95) compared to the current state-of-the-art approach. The code will be released.
|
cs/0612062
|
Federico Calzolari
|
Federico Calzolari, Michele Mammini, Monica Monachini
|
Unifying Lexicons in view of a Phonological and Morphological Lexical DB
|
4 pages
| null | null | null |
cs.IR
| null |
The present work falls in the line of activities promoted by the European
Language Resource Association (ELRA) Production Committee (PCom) and raises
issues in methods, procedures and tools for the reusability, creation, and
management of Language Resources. A two-fold purpose lies behind this
experiment. The first aim is to investigate the feasibility, define methods and
procedures for combining two Italian lexical resources that have incompatible
formats and complementary information into a Unified Lexicon (UL). The adopted
strategy and the procedures appointed are described together with the driving
criterion of the merging task, where a balance between human and computational
efforts is pursued. The coverage of the UL has been maximized, by making use of
simple and fast matching procedures. The second aim is to exploit this newly
obtained resource for implementing the phonological and morphological layers of
the CLIPS lexical database. Implementing these new layers and linking them with
the already existing syntactic and semantic layers is not a trivial task. The
constraints imposed by the model, the impact at the architectural level and the
solution adopted in order to make the whole database `speak' efficiently are
presented. Advantages vs. disadvantages are discussed.
|
[
{
"created": "Mon, 11 Dec 2006 14:45:49 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Calzolari",
"Federico",
""
],
[
"Mammini",
"Michele",
""
],
[
"Monachini",
"Monica",
""
]
] |
The present work falls in the line of activities promoted by the European Language Resource Association (ELRA) Production Committee (PCom) and raises issues in methods, procedures and tools for the reusability, creation, and management of Language Resources. A two-fold purpose lies behind this experiment. The first aim is to investigate the feasibility, define methods and procedures for combining two Italian lexical resources that have incompatible formats and complementary information into a Unified Lexicon (UL). The adopted strategy and the procedures appointed are described together with the driving criterion of the merging task, where a balance between human and computational efforts is pursued. The coverage of the UL has been maximized, by making use of simple and fast matching procedures. The second aim is to exploit this newly obtained resource for implementing the phonological and morphological layers of the CLIPS lexical database. Implementing these new layers and linking them with the already existing syntactic and semantic layers is not a trivial task. The constraints imposed by the model, the impact at the architectural level and the solution adopted in order to make the whole database `speak' efficiently are presented. Advantages vs. disadvantages are discussed.
|
2004.10956
|
Xiaoyu Tao
|
Xiaoyu Tao, Xiaopeng Hong, Xinyuan Chang, Songlin Dong, Xing Wei,
Yihong Gong
|
Few-Shot Class-Incremental Learning
|
Accepted by CVPR 2020 (oral)
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to incrementally learn new classes is crucial to the development
of real-world artificial intelligence systems. In this paper, we focus on a
challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few
labelled samples, without forgetting the previously learned ones. To address
this problem, we represent the knowledge using a neural gas (NG) network, which
can learn and preserve the topology of the feature manifold formed by different
classes. On this basis, we propose the TOpology-Preserving knowledge
InCrementer (TOPIC) framework. TOPIC mitigates the forgetting of the old
classes by stabilizing NG's topology and improves the representation learning
for few-shot new classes by growing and adapting NG to new training samples.
Comprehensive experimental results demonstrate that our proposed method
significantly outperforms other state-of-the-art class-incremental learning
methods on CIFAR100, miniImageNet, and CUB200 datasets.
|
[
{
"created": "Thu, 23 Apr 2020 03:38:33 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Apr 2020 02:12:32 GMT",
"version": "v2"
}
] |
2020-04-27
|
[
[
"Tao",
"Xiaoyu",
""
],
[
"Hong",
"Xiaopeng",
""
],
[
"Chang",
"Xinyuan",
""
],
[
"Dong",
"Songlin",
""
],
[
"Wei",
"Xing",
""
],
[
"Gong",
"Yihong",
""
]
] |
The ability to incrementally learn new classes is crucial to the development of real-world artificial intelligence systems. In this paper, we focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem. FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones. To address this problem, we represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes. On this basis, we propose the TOpology-Preserving knowledge InCrementer (TOPIC) framework. TOPIC mitigates the forgetting of the old classes by stabilizing NG's topology and improves the representation learning for few-shot new classes by growing and adapting NG to new training samples. Comprehensive experimental results demonstrate that our proposed method significantly outperforms other state-of-the-art class-incremental learning methods on CIFAR100, miniImageNet, and CUB200 datasets.
|
2302.00070
|
Ching-Yao Chuang
|
Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba,
Stefanie Jegelka
|
Debiasing Vision-Language Models via Biased Prompts
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning models have been shown to inherit biases from their training
datasets. This can be particularly problematic for vision-language foundation
models trained on uncurated datasets scraped from the internet. The biases can
be amplified and propagated to downstream applications like zero-shot
classifiers and text-to-image generative models. In this study, we propose a
general approach for debiasing vision-language foundation models by projecting
out biased directions in the text embedding. In particular, we show that
debiasing only the text embedding with a calibrated projection matrix suffices
to yield robust classifiers and fair generative models. The proposed
closed-form solution enables easy integration into large-scale pipelines, and
empirical results demonstrate that our approach effectively reduces social bias
and spurious correlation in both discriminative and generative vision-language
models without the need for additional data or training.
|
[
{
"created": "Tue, 31 Jan 2023 20:09:33 GMT",
"version": "v1"
},
{
"created": "Mon, 15 May 2023 07:51:14 GMT",
"version": "v2"
}
] |
2023-05-16
|
[
[
"Chuang",
"Ching-Yao",
""
],
[
"Jampani",
"Varun",
""
],
[
"Li",
"Yuanzhen",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Jegelka",
"Stefanie",
""
]
] |
Machine learning models have been shown to inherit biases from their training datasets. This can be particularly problematic for vision-language foundation models trained on uncurated datasets scraped from the internet. The biases can be amplified and propagated to downstream applications like zero-shot classifiers and text-to-image generative models. In this study, we propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding. In particular, we show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models. The proposed closed-form solution enables easy integration into large-scale pipelines, and empirical results demonstrate that our approach effectively reduces social bias and spurious correlation in both discriminative and generative vision-language models without the need for additional data or training.
|
2404.15157
|
Si Chen
|
Si Chen, Feiyang Kang, Ning Yu, Ruoxi Jia
|
FASTTRACK: Fast and Accurate Fact Tracing for LLMs
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Fact tracing seeks to identify specific training examples that serve as the
knowledge source for a given query. Existing approaches to fact tracing rely on
assessing the similarity between each training sample and the query along a
certain dimension, such as lexical similarity, gradient, or embedding space.
However, these methods fall short of effectively distinguishing between samples
that are merely relevant and those that actually provide supportive evidence
for the information sought by the query. This limitation often results in
suboptimal effectiveness. Moreover, these approaches necessitate the
examination of the similarity of individual training points for each query,
imposing significant computational demands and creating a substantial barrier
for practical applications. This paper introduces FASTTRACK, a novel approach
that harnesses the capabilities of Large Language Models (LLMs) to validate
supportive evidence for queries and at the same time clusters the training
database towards a reduced extent for LLMs to trace facts. Our experiments show
that FASTTRACK substantially outperforms existing methods in both accuracy and
efficiency, achieving more than 100\% improvement in F1 score over the
state-of-the-art methods while being 33x faster than \texttt{TracIn}.
|
[
{
"created": "Mon, 22 Apr 2024 00:07:55 GMT",
"version": "v1"
}
] |
2024-04-24
|
[
[
"Chen",
"Si",
""
],
[
"Kang",
"Feiyang",
""
],
[
"Yu",
"Ning",
""
],
[
"Jia",
"Ruoxi",
""
]
] |
Fact tracing seeks to identify specific training examples that serve as the knowledge source for a given query. Existing approaches to fact tracing rely on assessing the similarity between each training sample and the query along a certain dimension, such as lexical similarity, gradient, or embedding space. However, these methods fall short of effectively distinguishing between samples that are merely relevant and those that actually provide supportive evidence for the information sought by the query. This limitation often results in suboptimal effectiveness. Moreover, these approaches necessitate the examination of the similarity of individual training points for each query, imposing significant computational demands and creating a substantial barrier for practical applications. This paper introduces FASTTRACK, a novel approach that harnesses the capabilities of Large Language Models (LLMs) to validate supportive evidence for queries and at the same time clusters the training database towards a reduced extent for LLMs to trace facts. Our experiments show that FASTTRACK substantially outperforms existing methods in both accuracy and efficiency, achieving more than 100\% improvement in F1 score over the state-of-the-art methods while being 33x faster than \texttt{TracIn}.
|
2001.09977
|
Daniel de Freitas Adiwardana
|
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah
Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng
Lu, Quoc V. Le
|
Towards a Human-like Open-Domain Chatbot
|
38 pages, 12 figures
| null | null | null |
cs.CL cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Meena, a multi-turn open-domain chatbot trained end-to-end on data
mined and filtered from public domain social media conversations. This 2.6B
parameter neural network is simply trained to minimize perplexity of the next
token. We also propose a human evaluation metric called Sensibleness and
Specificity Average (SSA), which captures key elements of a human-like
multi-turn conversation. Our experiments show strong correlation between
perplexity and SSA. The fact that the best perplexity end-to-end trained Meena
scores high on SSA (72% on multi-turn evaluation) suggests that a human-level
SSA of 86% is potentially within reach if we can better optimize perplexity.
Additionally, the full version of Meena (with a filtering mechanism and tuned
decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots
we evaluated.
|
[
{
"created": "Mon, 27 Jan 2020 18:53:15 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jan 2020 18:58:14 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Feb 2020 07:36:47 GMT",
"version": "v3"
}
] |
2020-02-28
|
[
[
"Adiwardana",
"Daniel",
""
],
[
"Luong",
"Minh-Thang",
""
],
[
"So",
"David R.",
""
],
[
"Hall",
"Jamie",
""
],
[
"Fiedel",
"Noah",
""
],
[
"Thoppilan",
"Romal",
""
],
[
"Yang",
"Zi",
""
],
[
"Kulshreshtha",
"Apoorv",
""
],
[
"Nemade",
"Gaurav",
""
],
[
"Lu",
"Yifeng",
""
],
[
"Le",
"Quoc V.",
""
]
] |
We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated.
|
2402.04028
|
Erion \c{C}ano
|
Erion \c{C}ano, Dario Lamaj
|
AlbNews: A Corpus of Headlines for Topic Modeling in Albanian
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The scarcity of available text corpora for low-resource languages like
Albanian is a serious hurdle for research in natural language processing tasks.
This paper introduces AlbNews, a collection of 600 topically labeled news
headlines and 2600 unlabeled ones in Albanian. The data can be freely used for
conducting topic modeling research. We report the initial classification scores
of some traditional machine learning classifiers trained with the AlbNews
samples. These results show that basic models outperform the ensemble learning
and can serve as a baseline for future experiments.
|
[
{
"created": "Tue, 6 Feb 2024 14:24:28 GMT",
"version": "v1"
}
] |
2024-02-07
|
[
[
"Çano",
"Erion",
""
],
[
"Lamaj",
"Dario",
""
]
] |
The scarcity of available text corpora for low-resource languages like Albanian is a serious hurdle for research in natural language processing tasks. This paper introduces AlbNews, a collection of 600 topically labeled news headlines and 2600 unlabeled ones in Albanian. The data can be freely used for conducting topic modeling research. We report the initial classification scores of some traditional machine learning classifiers trained with the AlbNews samples. These results show that basic models outperform the ensemble learning ones and can serve as a baseline for future experiments.
|
1407.1386
|
Agi Kurucz
|
Christopher Hampson and Agi Kurucz
|
Undecidable propositional bimodal logics and one-variable first-order
linear temporal logics with counting
| null |
ACM TOCL vol. 16(3) (2015), 27:1-27:36
|
10.1145/2757285
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
First-order temporal logics are notorious for their bad computational
behaviour. It is known that even the two-variable monadic fragment is highly
undecidable over various linear timelines, and over branching time even
one-variable fragments might be undecidable. However, there have been several
attempts on finding well-behaved fragments of first-order temporal logics and
related temporal description logics, mostly either by restricting the available
quantifier patterns, or considering sub-Boolean languages. Here we analyse
seemingly `mild' extensions of decidable one-variable fragments with counting
capabilities, interpreted in models with constant, decreasing, and expanding
first-order domains. We show that over most classes of linear orders these
logics are (sometimes highly) undecidable, even without constant and function
symbols, and with the sole temporal operator `eventually'.
We establish connections with bimodal logics over 2D product structures
having linear and `difference' (inequality) component relations, and prove our
results in this bimodal setting. We show a general result saying that
satisfiability over many classes of bimodal models with commuting linear and
difference relations is undecidable. As a by-product, we also obtain new
examples of finitely axiomatisable but Kripke incomplete bimodal logics. Our
results generalise similar lower bounds on bimodal logics over products of two
linear relations, and our proof methods are quite different from the proofs of
these results. Unlike previous proofs that first `diagonally encode' an
infinite grid, and then use reductions of tiling or Turing machine problems,
here we make direct use of the grid-like structure of product frames and obtain
undecidability by reductions of counter (Minsky) machine problems.
|
[
{
"created": "Sat, 5 Jul 2014 10:45:41 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Feb 2015 10:55:40 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Jun 2015 16:11:17 GMT",
"version": "v3"
}
] |
2015-08-17
|
[
[
"Hampson",
"Christopher",
""
],
[
"Kurucz",
"Agi",
""
]
] |
First-order temporal logics are notorious for their bad computational behaviour. It is known that even the two-variable monadic fragment is highly undecidable over various linear timelines, and over branching time even one-variable fragments might be undecidable. However, there have been several attempts on finding well-behaved fragments of first-order temporal logics and related temporal description logics, mostly either by restricting the available quantifier patterns, or considering sub-Boolean languages. Here we analyse seemingly `mild' extensions of decidable one-variable fragments with counting capabilities, interpreted in models with constant, decreasing, and expanding first-order domains. We show that over most classes of linear orders these logics are (sometimes highly) undecidable, even without constant and function symbols, and with the sole temporal operator `eventually'. We establish connections with bimodal logics over 2D product structures having linear and `difference' (inequality) component relations, and prove our results in this bimodal setting. We show a general result saying that satisfiability over many classes of bimodal models with commuting linear and difference relations is undecidable. As a by-product, we also obtain new examples of finitely axiomatisable but Kripke incomplete bimodal logics. Our results generalise similar lower bounds on bimodal logics over products of two linear relations, and our proof methods are quite different from the proofs of these results. Unlike previous proofs that first `diagonally encode' an infinite grid, and then use reductions of tiling or Turing machine problems, here we make direct use of the grid-like structure of product frames and obtain undecidability by reductions of counter (Minsky) machine problems.
|
2311.05965
|
Kaiyan Zhang
|
Biqing Qi, Kaiyan Zhang, Haoxiang Li, Kai Tian, Sihang Zeng, Zhang-Ren
Chen, Bowen Zhou
|
Large Language Models are Zero Shot Hypothesis Proposers
|
Instruction Workshop @ NeurIPS 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Significant scientific discoveries have driven the progress of human
civilisation. The explosion of scientific literature and data has created
information barriers across disciplines that have slowed the pace of scientific
discovery. Large Language Models (LLMs) hold a wealth of global and
interdisciplinary knowledge that promises to break down these information
barriers and foster a new wave of scientific discovery. However, the potential
of LLMs for scientific discovery has not been formally explored. In this paper,
we start by investigating whether LLMs can propose scientific hypotheses. To
this end, we construct a dataset consisting of background knowledge and
hypothesis pairs from biomedical literature. The dataset is divided into training, seen,
and unseen test sets based on the publication date to control visibility. We
subsequently evaluate the hypothesis generation capabilities of various
top-tier instructed models in zero-shot, few-shot, and fine-tuning settings,
including both closed and open-source LLMs. Additionally, we introduce an
LLM-based multi-agent cooperative framework with different role designs and
external tools to enhance the capabilities related to generating hypotheses. We
also design four metrics through a comprehensive review to evaluate the
generated hypotheses for both ChatGPT-based and human evaluations. Through
experiments and analyses, we arrive at the following findings: 1) LLMs
surprisingly generate untrained yet validated hypotheses from testing
literature. 2) Increasing uncertainty facilitates candidate generation,
potentially enhancing zero-shot hypothesis generation capabilities. These
findings strongly support the potential of LLMs as catalysts for new scientific
discoveries and guide further exploration.
|
[
{
"created": "Fri, 10 Nov 2023 10:03:49 GMT",
"version": "v1"
}
] |
2023-11-13
|
[
[
"Qi",
"Biqing",
""
],
[
"Zhang",
"Kaiyan",
""
],
[
"Li",
"Haoxiang",
""
],
[
"Tian",
"Kai",
""
],
[
"Zeng",
"Sihang",
""
],
[
"Chen",
"Zhang-Ren",
""
],
[
"Zhou",
"Bowen",
""
]
] |
Significant scientific discoveries have driven the progress of human civilisation. The explosion of scientific literature and data has created information barriers across disciplines that have slowed the pace of scientific discovery. Large Language Models (LLMs) hold a wealth of global and interdisciplinary knowledge that promises to break down these information barriers and foster a new wave of scientific discovery. However, the potential of LLMs for scientific discovery has not been formally explored. In this paper, we start by investigating whether LLMs can propose scientific hypotheses. To this end, we construct a dataset consisting of background knowledge and hypothesis pairs from biomedical literature. The dataset is divided into training, seen, and unseen test sets based on the publication date to control visibility. We subsequently evaluate the hypothesis generation capabilities of various top-tier instructed models in zero-shot, few-shot, and fine-tuning settings, including both closed and open-source LLMs. Additionally, we introduce an LLM-based multi-agent cooperative framework with different role designs and external tools to enhance the capabilities related to generating hypotheses. We also design four metrics through a comprehensive review to evaluate the generated hypotheses for both ChatGPT-based and human evaluations. Through experiments and analyses, we arrive at the following findings: 1) LLMs surprisingly generate untrained yet validated hypotheses from testing literature. 2) Increasing uncertainty facilitates candidate generation, potentially enhancing zero-shot hypothesis generation capabilities. These findings strongly support the potential of LLMs as catalysts for new scientific discoveries and guide further exploration.
|
1303.5712
|
Kuo-Chu Chang
|
Kuo-Chu Chang, Robert Fung
|
Symbolic Probabilistic Inference with Continuous Variables
|
Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991)
| null | null |
UAI-P-1991-PG-77-81
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on Symbolic Probabilistic Inference (SPI) [2, 3] has provided an
algorithm for resolving general queries in Bayesian networks. SPI applies the
concept of dependency directed backward search to probabilistic inference, and
is incremental with respect to both queries and observations. Unlike
traditional Bayesian network inferencing algorithms, SPI algorithm is goal
directed, performing only those calculations that are required to respond to
queries. Research to date on SPI applies to Bayesian networks with
discrete-valued variables and does not address variables with continuous
values. In this papers, we extend the SPI algorithm to handle Bayesian networks
made up of continuous variables where the relationships between the variables
are restricted to be ?linear gaussian?. We call this variation of the SPI
algorithm, SPI Continuous (SPIC). SPIC modifies the three basic SPI operations:
multiplication, summation, and substitution. However, SPIC retains the
framework of the SPI algorithm, namely building the search tree and recursive
query mechanism and therefore retains the goal-directed and incrementality
features of SPI.
|
[
{
"created": "Wed, 20 Mar 2013 15:30:11 GMT",
"version": "v1"
}
] |
2013-03-26
|
[
[
"Chang",
"Kuo-Chu",
""
],
[
"Fung",
"Robert",
""
]
] |
Research on Symbolic Probabilistic Inference (SPI) [2, 3] has provided an algorithm for resolving general queries in Bayesian networks. SPI applies the concept of dependency directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. Unlike traditional Bayesian network inferencing algorithms, the SPI algorithm is goal-directed, performing only those calculations that are required to respond to queries. Research to date on SPI applies to Bayesian networks with discrete-valued variables and does not address variables with continuous values. In this paper, we extend the SPI algorithm to handle Bayesian networks made up of continuous variables where the relationships between the variables are restricted to be 'linear Gaussian'. We call this variation of the SPI algorithm SPI Continuous (SPIC). SPIC modifies the three basic SPI operations: multiplication, summation, and substitution. However, SPIC retains the framework of the SPI algorithm, namely building the search tree and recursive query mechanism and therefore retains the goal-directed and incrementality features of SPI.
|
2210.03630
|
Rongqian Ma
|
Rongqian Ma
|
Boundaries, Extensions, and Challenges of Visualization for Humanities
Data: Reflections on Three Cases
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper discusses problems of visualizing humanities data of various
forms, such as video data, archival data, and numeric-oriented social science
data, with three distinct case studies. By describing the visualization
practices and the issues that emerged from the process, this paper uses the
three cases to each identify a pertinent question for reflection. More
specifically, I reflect on the difficulty, thoughts, and considerations of
choosing the most effective and sufficient forms of visualization to enhance
the expression of specific cultural and humanities data in the projects.
Discussions in this paper concern questions such as: how does the
multi-modality of humanities and cultural data challenge the understanding,
roles, and functions of visualizations, and more broadly, visual
representations in humanities research? What do we lose of the original data by
visualizing them in those projects? How can we balance the benefits and
disadvantages of visual technologies to display complex, unique, and often
culturally saturated humanities datasets?
|
[
{
"created": "Fri, 7 Oct 2022 15:32:49 GMT",
"version": "v1"
}
] |
2022-10-10
|
[
[
"Ma",
"Rongqian",
""
]
] |
This paper discusses problems of visualizing humanities data of various forms, such as video data, archival data, and numeric-oriented social science data, with three distinct case studies. By describing the visualization practices and the issues that emerged from the process, this paper uses the three cases to each identify a pertinent question for reflection. More specifically, I reflect on the difficulty, thoughts, and considerations of choosing the most effective and sufficient forms of visualization to enhance the expression of specific cultural and humanities data in the projects. Discussions in this paper concern questions such as: how does the multi-modality of humanities and cultural data challenge the understanding, roles, and functions of visualizations, and more broadly, visual representations in humanities research? What do we lose of the original data by visualizing them in those projects? How can we balance the benefits and disadvantages of visual technologies to display complex, unique, and often culturally saturated humanities datasets?
|
1703.00446
|
Andjela Draganic
|
Zoja Vulaj, Andjela Draganic, Milos Brajovic, Irena Orovic
|
A tool for ECG signal analysis using standard and optimized Hermite
transform
|
accepted for presentation at the MECO 2017 conference (6th
Mediterranean Conference on Embedded Computing MECO 2017, Bar, Montenegro)
| null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of a system that would ease the diagnosis of heart diseases
would also speed up the work of cardiology departments in hospitals and
facilitate the monitoring of patients with portable devices. This paper
presents a tool for ECG signal analysis which is designed in Matlab. The
Hermite transform domain is exploited for the analysis. The proposed transform
domain is very convenient for ECG signal analysis and classification. Parts of
the ECG signals, i.e. QRS complexes, show shape similarity with the Hermite
basis functions, which is one of the reasons for choosing this domain. Also,
the information about the signal can be represented using a small set of
coefficients in this domain, which makes data transmission and analysis faster.
The signal concentration in the Hermite domain and consequently, the number of
samples required for signal representation, can additionally be reduced by
performing the parametrization of the Hermite transform. For the comparison
purpose, the Fourier transform domain is also implemented within the software,
in order to compare the signal concentration in two transform domains.
|
[
{
"created": "Wed, 1 Mar 2017 17:00:42 GMT",
"version": "v1"
},
{
"created": "Mon, 8 May 2017 16:56:04 GMT",
"version": "v2"
}
] |
2017-05-09
|
[
[
"Vulaj",
"Zoja",
""
],
[
"Draganic",
"Andjela",
""
],
[
"Brajovic",
"Milos",
""
],
[
"Orovic",
"Irena",
""
]
] |
The development of a system that would ease the diagnosis of heart diseases would also speed up the work of cardiology departments in hospitals and facilitate the monitoring of patients with portable devices. This paper presents a tool for ECG signal analysis which is designed in Matlab. The Hermite transform domain is exploited for the analysis. The proposed transform domain is very convenient for ECG signal analysis and classification. Parts of the ECG signals, i.e. QRS complexes, show shape similarity with the Hermite basis functions, which is one of the reasons for choosing this domain. Also, the information about the signal can be represented using a small set of coefficients in this domain, which makes data transmission and analysis faster. The signal concentration in the Hermite domain and consequently, the number of samples required for signal representation, can additionally be reduced by performing the parametrization of the Hermite transform. For the comparison purpose, the Fourier transform domain is also implemented within the software, in order to compare the signal concentration in two transform domains.
|
2106.14949
|
Damian Moctezuma Enriquez
|
Damian Moctezuma Enriquez, Eduardo Rodarte Leyva
|
Design for a blind stereoscopic picture taker
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
A schematic design for an autonomous picture taker used for obtaining point
clouds from pictures taken inside a house. We propose the use of an equation
programmed into an embedded system that tracks the points inside a room and
then opens the spacing between two cameras of the same type in order to take
pictures that are later used to create the point clouds for the mathematical
model that the end user will apply to those pictures.
|
[
{
"created": "Mon, 28 Jun 2021 19:14:29 GMT",
"version": "v1"
}
] |
2021-06-30
|
[
[
"Enriquez",
"Damian Moctezuma",
""
],
[
"Leyva",
"Eduardo Rodarte",
""
]
] |
A schematic design for an autonomous picture taker used for obtaining point clouds from pictures taken inside a house. We propose the use of an equation programmed into an embedded system that tracks the points inside a room and then opens the spacing between two cameras of the same type in order to take pictures that are later used to create the point clouds for the mathematical model that the end user will apply to those pictures.
|
2302.13797
|
Jianan Zhou
|
Jianan Zhou, Yaoxin Wu, Zhiguang Cao, Wen Song, Jie Zhang, Zhenghua
Chen
|
Learning Large Neighborhood Search for Vehicle Routing in Airport Ground
Handling
|
Accepted by IEEE Transactions on Knowledge and Data Engineering
(TKDE)
| null |
10.1109/TKDE.2023.3249799
| null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dispatching vehicle fleets to serve flights is a key task in airport ground
handling (AGH). Due to the notable growth of flights, it is challenging to
simultaneously schedule multiple types of operations (services) for a large
number of flights, where each type of operation is performed by one specific
vehicle fleet. To tackle this issue, we first represent the operation
scheduling as a complex vehicle routing problem and formulate it as a mixed
integer linear programming (MILP) model. Then given the graph representation of
the MILP model, we propose a learning assisted large neighborhood search (LNS)
method using data generated based on real scenarios, where we integrate
imitation learning and graph convolutional network (GCN) to learn a destroy
operator to automatically select variables, and employ an off-the-shelf solver
as the repair operator to reoptimize the selected variables. Experimental
results based on a real airport show that the proposed method allows for
handling up to 200 flights with 10 types of operations simultaneously, and
outperforms state-of-the-art methods. Moreover, the learned method performs
consistently across different solvers, and generalizes well on larger
instances, verifying the versatility and scalability of our method.
|
[
{
"created": "Mon, 27 Feb 2023 14:16:36 GMT",
"version": "v1"
}
] |
2023-03-01
|
[
[
"Zhou",
"Jianan",
""
],
[
"Wu",
"Yaoxin",
""
],
[
"Cao",
"Zhiguang",
""
],
[
"Song",
"Wen",
""
],
[
"Zhang",
"Jie",
""
],
[
"Chen",
"Zhenghua",
""
]
] |
Dispatching vehicle fleets to serve flights is a key task in airport ground handling (AGH). Due to the notable growth of flights, it is challenging to simultaneously schedule multiple types of operations (services) for a large number of flights, where each type of operation is performed by one specific vehicle fleet. To tackle this issue, we first represent the operation scheduling as a complex vehicle routing problem and formulate it as a mixed integer linear programming (MILP) model. Then given the graph representation of the MILP model, we propose a learning assisted large neighborhood search (LNS) method using data generated based on real scenarios, where we integrate imitation learning and graph convolutional network (GCN) to learn a destroy operator to automatically select variables, and employ an off-the-shelf solver as the repair operator to reoptimize the selected variables. Experimental results based on a real airport show that the proposed method allows for handling up to 200 flights with 10 types of operations simultaneously, and outperforms state-of-the-art methods. Moreover, the learned method performs consistently across different solvers, and generalizes well on larger instances, verifying the versatility and scalability of our method.
|
1801.08583
|
Golshan Golnari
|
Golshan Golnari, Zhi-Li Zhang, Daniel Boley
|
Random Walk Fundamental Tensor and Its Applications to Network Analysis
| null | null | null | null |
cs.DM cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We first present a comprehensive review of various random walk metrics used
in the literature and express them in a consistent framework. We then
introduce the fundamental tensor -- a generalization of the well-known
fundamental matrix -- and show that classical random walk metrics can be
derived from it in a unified manner. We provide a collection of relations for
random walk metrics that are useful and insightful for network studies. To demonstrate the
usefulness and efficacy of the proposed fundamental tensor in network analysis,
we present four important applications: 1) unification of network centrality
measures, 2) characterization of (generalized) network articulation points, 3)
identification of network most influential nodes, and 4) fast computation of
network reachability after failures.
|
[
{
"created": "Thu, 25 Jan 2018 20:01:07 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2018 20:24:46 GMT",
"version": "v2"
},
{
"created": "Sat, 16 Jun 2018 04:35:31 GMT",
"version": "v3"
}
] |
2018-06-19
|
[
[
"Golnari",
"Golshan",
""
],
[
"Zhang",
"Zhi-Li",
""
],
[
"Boley",
"Daniel",
""
]
] |
We first present a comprehensive review of various random walk metrics used in the literature and express them in a consistent framework. We then introduce the fundamental tensor -- a generalization of the well-known fundamental matrix -- and show that classical random walk metrics can be derived from it in a unified manner. We provide a collection of relations for random walk metrics that are useful and insightful for network studies. To demonstrate the usefulness and efficacy of the proposed fundamental tensor in network analysis, we present four important applications: 1) unification of network centrality measures, 2) characterization of (generalized) network articulation points, 3) identification of network most influential nodes, and 4) fast computation of network reachability after failures.
|
1908.04478
|
EPTCS
|
Thomas Seiller (CNRS, France), Steffen Jost (LMU Munich, Germany)
|
Proceedings Third Joint Workshop on Developments in Implicit
Computational complExity and Foundational & Practical Aspects of Resource
Analysis
| null |
EPTCS 298, 2019
|
10.4204/EPTCS.298
| null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
These proceedings present the accepted regular papers and some selected
extended abstracts from the 3rd joint DICE-FOPARA workshop, which was held in
Prague, Czech Republic on April 6-7, 2019, as a part of ETAPS. The joint
workshop provides synergies by combining two complementary communities:
The 10th DICE workshop explores the area of Implicit Computational Complexity
(ICC), which grew out from several proposals to use logic and formal methods to
provide languages for complexity-bounded computation (e.g. Ptime, Logspace
computation). It aims at studying the computational complexity of programs
without referring to external measuring conditions or a particular machine
model, but only by considering language restrictions or logical/computational
principles entailing complexity properties. Several approaches have been
explored for that purpose, such as restrictions on primitive recursion and
ramification, rewriting systems, linear logic, types and lambda calculus,
interpretations of functional and imperative programs.
The 6th FOPARA workshop serves as a forum for presenting original research
results that are relevant to the analysis of resource (e.g. time, space,
energy) consumption by computer programs. The workshop aims to bring together
the researchers that work on foundational issues with the researchers that
focus more on practical results. Therefore, both theoretical and practical
contributions are encouraged. We also encourage papers that combine theory and
practice.
This third joint DICE-FOPARA workshop at ETAPS 2019 follows the successful
experiences of co-location of DICE-FOPARA at ETAPS 2015 in London and ETAPS
2017 in Uppsala.
|
[
{
"created": "Tue, 13 Aug 2019 04:09:00 GMT",
"version": "v1"
}
] |
2019-08-14
|
[
[
"Seiller",
"Thomas",
"",
"CNRS, France"
],
[
"Jost",
"Steffen",
"",
"LMU Munich, Germany"
]
] |
These proceedings present the accepted regular papers and some selected extended abstracts from the 3rd joint DICE-FOPARA workshop, which was held in Prague, Czech Republic on April 6-7, 2019, as a part of ETAPS. The joint workshop provides synergies by combining two complementary communities: The 10th DICE workshop explores the area of Implicit Computational Complexity (ICC), which grew out from several proposals to use logic and formal methods to provide languages for complexity-bounded computation (e.g. Ptime, Logspace computation). It aims at studying the computational complexity of programs without referring to external measuring conditions or a particular machine model, but only by considering language restrictions or logical/computational principles entailing complexity properties. Several approaches have been explored for that purpose, such as restrictions on primitive recursion and ramification, rewriting systems, linear logic, types and lambda calculus, interpretations of functional and imperative programs. The 6th FOPARA workshop serves as a forum for presenting original research results that are relevant to the analysis of resource (e.g. time, space, energy) consumption by computer programs. The workshop aims to bring together the researchers that work on foundational issues with the researchers that focus more on practical results. Therefore, both theoretical and practical contributions are encouraged. We also encourage papers that combine theory and practice. This third joint DICE-FOPARA workshop at ETAPS 2019 follows the successful experiences of co-location of DICE-FOPARA at ETAPS 2015 in London and ETAPS 2017 in Uppsala.
|
1811.08624
|
Steven Koester
|
Andrew W. Stephan, Jiaxi Hu, and Steven J. Koester
|
Benchmarking Inverse Rashba-Edelstein Magnetoelectric Devices for
Neuromorphic Computing
|
8 pages, 6 figures
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new design for a cellular neural network with spintronic neurons
and CMOS-based synapses. Harnessing the magnetoelectric and inverse
Rashba-Edelstein effects allows natural emulation of the behavior of an ideal
cellular network. This combination of effects offers an increase in speed and
efficiency over other spintronic neural networks. A rigorous performance
analysis via simulation is provided.
|
[
{
"created": "Wed, 21 Nov 2018 08:12:59 GMT",
"version": "v1"
}
] |
2018-11-22
|
[
[
"Stephan",
"Andrew W.",
""
],
[
"Hu",
"Jiaxi",
""
],
[
"Koester",
"Steven J.",
""
]
] |
We propose a new design for a cellular neural network with spintronic neurons and CMOS-based synapses. Harnessing the magnetoelectric and inverse Rashba-Edelstein effects allows natural emulation of the behavior of an ideal cellular network. This combination of effects offers an increase in speed and efficiency over other spintronic neural networks. A rigorous performance analysis via simulation is provided.
|
1106.2325
|
Shujie Hou
|
Shujie Hou, Robert C. Qiu, Zhe Chen, Zhen Hu
|
SVM and Dimensionality Reduction in Cognitive Radio with Experimental
Validation
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a trend of applying machine learning algorithms to cognitive radio.
One fundamental open problem is to determine how and where these algorithms are
useful in a cognitive radio network. In radar and sensing signal processing,
the control of degrees of freedom (DOF)---or dimensionality---is the first
step, called pre-processing. In this paper, the combination of dimensionality
reduction with SVM is proposed rather than only applying SVM for classification
in cognitive radio. Measured Wi-Fi signals with high signal to noise ratio
(SNR) are employed in the experiments. The DOF of Wi-Fi signals is extracted by
dimensionality reduction techniques. Experimental results show that with
dimensionality reduction, the performance of classification is much better with
fewer features than without dimensionality reduction. The error rates
of classification with only one feature of the proposed algorithm can match the
error rates of 13 features of the original data. The proposed method will be
further tested in our cognitive radio network testbed.
|
[
{
"created": "Sun, 12 Jun 2011 18:03:06 GMT",
"version": "v1"
}
] |
2011-06-14
|
[
[
"Hou",
"Shujie",
""
],
[
"Qiu",
"Robert C.",
""
],
[
"Chen",
"Zhe",
""
],
[
"Hu",
"Zhen",
""
]
] |
There is a trend of applying machine learning algorithms to cognitive radio. One fundamental open problem is to determine how and where these algorithms are useful in a cognitive radio network. In radar and sensing signal processing, controlling the degrees of freedom (DOF)---or dimensionality---is the first step, called pre-processing. In this paper, we propose combining dimensionality reduction with SVM, rather than applying SVM alone for classification in cognitive radio. Measured Wi-Fi signals with high signal-to-noise ratio (SNR) are employed in the experiments. The DOF of the Wi-Fi signals is extracted by dimensionality reduction techniques. Experimental results show that with dimensionality reduction, classification performance is much better with fewer features than without dimensionality reduction: using only one feature, the proposed algorithm matches the error rates obtained with 13 features of the original data. The proposed method will be further tested in our cognitive radio network testbed.
|
2106.00942
|
Kourosh Hakhamaneshi
|
Kourosh Hakhamaneshi, Pieter Abbeel, Vladimir Stojanovic, Aditya
Grover
|
JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The goal of Multi-task Bayesian Optimization (MBO) is to minimize the number
of queries required to accurately optimize a target black-box function, given
access to offline evaluations of other auxiliary functions. When offline
datasets are large, the scalability of prior approaches comes at the expense of
expressivity and inference quality. We propose JUMBO, an MBO algorithm that
sidesteps these limitations by querying additional data based on a combination
of acquisition signals derived from training two Gaussian Processes (GP): a
cold-GP operating directly in the input domain and a warm-GP that operates in
the feature space of a deep neural network pretrained using the offline data.
Such a decomposition can dynamically control the reliability of information
derived from the online and offline data and the use of pretrained neural
networks permits scalability to large offline datasets. Theoretically, we
derive regret bounds for JUMBO and show that it achieves no-regret under
conditions analogous to GP-UCB (Srinivas et al. 2010). Empirically, we
demonstrate significant performance improvements over existing approaches on
two real-world optimization problems: hyper-parameter optimization and
automated circuit design.
|
[
{
"created": "Wed, 2 Jun 2021 05:03:38 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Mar 2022 17:53:59 GMT",
"version": "v2"
}
] |
2022-03-11
|
[
[
"Hakhamaneshi",
"Kourosh",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Stojanovic",
"Vladimir",
""
],
[
"Grover",
"Aditya",
""
]
] |
The goal of Multi-task Bayesian Optimization (MBO) is to minimize the number of queries required to accurately optimize a target black-box function, given access to offline evaluations of other auxiliary functions. When offline datasets are large, the scalability of prior approaches comes at the expense of expressivity and inference quality. We propose JUMBO, an MBO algorithm that sidesteps these limitations by querying additional data based on a combination of acquisition signals derived from training two Gaussian Processes (GP): a cold-GP operating directly in the input domain and a warm-GP that operates in the feature space of a deep neural network pretrained using the offline data. Such a decomposition can dynamically control the reliability of information derived from the online and offline data and the use of pretrained neural networks permits scalability to large offline datasets. Theoretically, we derive regret bounds for JUMBO and show that it achieves no-regret under conditions analogous to GP-UCB (Srinivas et al. 2010). Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems: hyper-parameter optimization and automated circuit design.
|
1702.07205
|
Waldemar Koczkodaj Prof.
|
W.W. Koczkodaj and J.-P. Magnot and J. Mazurek and J.F. Peters and H.
Rakhshani and M. Soltys and D. Strza{\l}ka and J. Szybowski and A. Tozzi
|
On normalization of inconsistency indicators in pairwise comparisons
|
15 pages, 3 figures
| null | null | null |
cs.DM
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this study, we provide mathematical and practice-driven justification for
using $[0,1]$ normalization of inconsistency indicators in pairwise
comparisons. The need for normalization, as well as problems with the lack of
normalization, are presented. A new type of paradox of infinity is described.
|
[
{
"created": "Thu, 23 Feb 2017 13:33:51 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Feb 2017 17:48:19 GMT",
"version": "v2"
}
] |
2017-02-28
|
[
[
"Koczkodaj",
"W. W.",
""
],
[
"Magnot",
"J. -P.",
""
],
[
"Mazurek",
"J.",
""
],
[
"Peters",
"J. F.",
""
],
[
"Rakhshani",
"H.",
""
],
[
"Soltys",
"M.",
""
],
[
"Strzałka",
"D.",
""
],
[
"Szybowski",
"J.",
""
],
[
"Tozzi",
"A.",
""
]
] |
In this study, we provide mathematical and practice-driven justification for using $[0,1]$ normalization of inconsistency indicators in pairwise comparisons. The need for normalization, as well as problems with the lack of normalization, are presented. A new type of paradox of infinity is described.
|
2210.17505
|
Roberto Casadei
|
Roberto Casadei, Stefano Mariani, Danilo Pianini, Mirko Viroli, Franco
Zambonelli
|
Space-Fluid Adaptive Sampling by Self-Organisation
| null |
Logical Methods in Computer Science, Volume 19, Issue 4 (December
18, 2023) lmcs:10233
|
10.46298/lmcs-19(4:29)2023
| null |
cs.DC cs.AI cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
A recurrent task in coordinated systems is managing (estimating, predicting,
or controlling) signals that vary in space, such as distributed sensed data or
computation outcomes. Especially in large-scale settings, the problem can be
addressed through decentralised and situated computing systems: nodes can
locally sense, process, and act upon signals, and coordinate with neighbours to
implement collective strategies. Accordingly, in this work we devise
distributed coordination strategies for the estimation of a spatial phenomenon
through collaborative adaptive sampling. Our design is based on the idea of
dynamically partitioning space into regions that compete and grow/shrink to
provide accurate aggregate sampling. Such regions hence define a sort of
virtualised space that is "fluid", since its structure adapts in response to
pressure forces exerted by the underlying phenomenon. We provide an adaptive
sampling algorithm in the field-based coordination framework, and prove it is
self-stabilising and locally optimal. Finally, we verify by simulation that the
proposed algorithm effectively carries out a spatially adaptive sampling while
maintaining a tuneable trade-off between accuracy and efficiency.
|
[
{
"created": "Mon, 31 Oct 2022 17:29:41 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Mar 2023 17:31:22 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Aug 2023 07:38:51 GMT",
"version": "v3"
},
{
"created": "Thu, 5 Oct 2023 10:46:56 GMT",
"version": "v4"
},
{
"created": "Fri, 15 Dec 2023 11:15:25 GMT",
"version": "v5"
}
] |
2024-02-14
|
[
[
"Casadei",
"Roberto",
""
],
[
"Mariani",
"Stefano",
""
],
[
"Pianini",
"Danilo",
""
],
[
"Viroli",
"Mirko",
""
],
[
"Zambonelli",
"Franco",
""
]
] |
A recurrent task in coordinated systems is managing (estimating, predicting, or controlling) signals that vary in space, such as distributed sensed data or computation outcomes. Especially in large-scale settings, the problem can be addressed through decentralised and situated computing systems: nodes can locally sense, process, and act upon signals, and coordinate with neighbours to implement collective strategies. Accordingly, in this work we devise distributed coordination strategies for the estimation of a spatial phenomenon through collaborative adaptive sampling. Our design is based on the idea of dynamically partitioning space into regions that compete and grow/shrink to provide accurate aggregate sampling. Such regions hence define a sort of virtualised space that is "fluid", since its structure adapts in response to pressure forces exerted by the underlying phenomenon. We provide an adaptive sampling algorithm in the field-based coordination framework, and prove it is self-stabilising and locally optimal. Finally, we verify by simulation that the proposed algorithm effectively carries out a spatially adaptive sampling while maintaining a tuneable trade-off between accuracy and efficiency.
|
2107.12235
|
Marco De Nadai
|
Lorenzo Lucchini, Simone Centellegher, Luca Pappalardo, Riccardo
Gallotti, Filippo Privitera, Bruno Lepri and Marco De Nadai
|
Living in a pandemic: adaptation of individual mobility and social
activity in the US
| null | null | null | null |
cs.CY cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The non-pharmaceutical interventions (NPIs) aimed at reducing the diffusion
of the COVID-19 pandemic have dramatically influenced our behaviour in everyday
life. In this work, we study how individuals adapted their daily movements and
person-to-person contact patterns over time in response to the COVID-19
pandemic and the NPIs. We leverage longitudinal GPS mobility data of hundreds
of thousands of anonymous individuals in four US states and empirically show
the dramatic disruption in people's lives. We find that local interventions did
not just impact the number of visits to different venues but also how people
experience them. Individuals spend less time in venues, preferring simpler and
more predictable routines and reducing person-to-person contact activities.
Moreover, we show that the stringency of interventions alone does not explain
the number and duration of visits to venues: individual patterns of visits seem
to be influenced by the local severity of the pandemic and a risk adaptation
factor, which increases people's mobility regardless of the stringency of
interventions.
|
[
{
"created": "Mon, 26 Jul 2021 14:27:22 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Aug 2021 13:18:43 GMT",
"version": "v2"
}
] |
2021-08-18
|
[
[
"Lucchini",
"Lorenzo",
""
],
[
"Centellegher",
"Simone",
""
],
[
"Pappalardo",
"Luca",
""
],
[
"Gallotti",
"Riccardo",
""
],
[
"Privitera",
"Filippo",
""
],
[
"Lepri",
"Bruno",
""
],
[
"De Nadai",
"Marco",
""
]
] |
The non-pharmaceutical interventions (NPIs) aimed at reducing the diffusion of the COVID-19 pandemic have dramatically influenced our behaviour in everyday life. In this work, we study how individuals adapted their daily movements and person-to-person contact patterns over time in response to the COVID-19 pandemic and the NPIs. We leverage longitudinal GPS mobility data of hundreds of thousands of anonymous individuals in four US states and empirically show the dramatic disruption in people's lives. We find that local interventions did not just impact the number of visits to different venues but also how people experience them. Individuals spend less time in venues, preferring simpler and more predictable routines and reducing person-to-person contact activities. Moreover, we show that the stringency of interventions alone does not explain the number and duration of visits to venues: individual patterns of visits seem to be influenced by the local severity of the pandemic and a risk adaptation factor, which increases people's mobility regardless of the stringency of interventions.
|
2402.13858
|
Qiang Huang
|
Qiang Huang, Yanhao Wang, Yiqun Sun, Anthony K. H. Tung
|
Diversity-Aware $k$-Maximum Inner Product Search Revisited
|
14 pages, 9 figures, and 5 tables
| null | null | null |
cs.IR cs.DB cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The $k$-Maximum Inner Product Search ($k$MIPS) serves as a foundational
component in recommender systems and various data mining tasks. However, while
most existing $k$MIPS approaches prioritize the efficient retrieval of highly
relevant items for users, they often neglect an equally pivotal facet of search
results: \emph{diversity}. To bridge this gap, we revisit and refine the
diversity-aware $k$MIPS (D$k$MIPS) problem by incorporating two well-known
diversity objectives -- minimizing the average and maximum pairwise item
similarities within the results -- into the original relevance objective. This
enhancement, inspired by Maximal Marginal Relevance (MMR), offers users a
controllable trade-off between relevance and diversity. We introduce
\textsc{Greedy} and \textsc{DualGreedy}, two linear scan-based algorithms
tailored for D$k$MIPS. They both achieve data-dependent approximations and,
when aiming to minimize the average pairwise similarity, \textsc{DualGreedy}
attains an approximation ratio of $1/4$ with an additive term for
regularization. To further improve query efficiency, we integrate a lightweight
Ball-Cone Tree (BC-Tree) index with the two algorithms. Finally, comprehensive
experiments on ten real-world data sets demonstrate the efficacy of our
proposed methods, showcasing their capability to efficiently deliver diverse
and relevant search results to users.
|
[
{
"created": "Wed, 21 Feb 2024 15:09:51 GMT",
"version": "v1"
}
] |
2024-02-22
|
[
[
"Huang",
"Qiang",
""
],
[
"Wang",
"Yanhao",
""
],
[
"Sun",
"Yiqun",
""
],
[
"Tung",
"Anthony K. H.",
""
]
] |
The $k$-Maximum Inner Product Search ($k$MIPS) serves as a foundational component in recommender systems and various data mining tasks. However, while most existing $k$MIPS approaches prioritize the efficient retrieval of highly relevant items for users, they often neglect an equally pivotal facet of search results: \emph{diversity}. To bridge this gap, we revisit and refine the diversity-aware $k$MIPS (D$k$MIPS) problem by incorporating two well-known diversity objectives -- minimizing the average and maximum pairwise item similarities within the results -- into the original relevance objective. This enhancement, inspired by Maximal Marginal Relevance (MMR), offers users a controllable trade-off between relevance and diversity. We introduce \textsc{Greedy} and \textsc{DualGreedy}, two linear scan-based algorithms tailored for D$k$MIPS. They both achieve data-dependent approximations and, when aiming to minimize the average pairwise similarity, \textsc{DualGreedy} attains an approximation ratio of $1/4$ with an additive term for regularization. To further improve query efficiency, we integrate a lightweight Ball-Cone Tree (BC-Tree) index with the two algorithms. Finally, comprehensive experiments on ten real-world data sets demonstrate the efficacy of our proposed methods, showcasing their capability to efficiently deliver diverse and relevant search results to users.
|
1901.09097
|
Hesham Mohamed Eraqi
|
Hesham M. Eraqi, Yehya Abouelnaga, Mohamed H. Saad, Mohamed N.
Moustafa
|
Driver Distraction Identification with an Ensemble of Convolutional
Neural Networks
|
arXiv admin note: substantial text overlap with arXiv:1706.09498
|
Journal of Advanced Transportation, Machine Learning in
Transportation (MLT) Issue, 2019
| null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The World Health Organization (WHO) reported 1.25 million deaths yearly due
to road traffic accidents worldwide, and the number has been continuously
increasing over the last few years. Nearly a fifth of these accidents are
caused by distracted drivers. Existing work on distracted driver detection is
concerned with a small set of distractions (mostly, cell phone usage), and
unreliable ad-hoc methods are often used. In this paper, we present the first
publicly available dataset for driver distraction identification with more
distraction postures than existing alternatives. In addition, we propose a
reliable deep learning-based solution that achieves a 90% accuracy. The system
consists of a genetically-weighted ensemble of convolutional neural networks;
we show that a weighted ensemble of classifiers using a genetic algorithm
yields better classification confidence. We also study the effect of
different visual elements in distraction detection by means of face and hand
localizations, and skin segmentation. Finally, we present a thinned version of
our ensemble that can achieve 84.64% classification accuracy and operate in a
real-time environment.
|
[
{
"created": "Tue, 22 Jan 2019 10:47:00 GMT",
"version": "v1"
}
] |
2019-01-29
|
[
[
"Eraqi",
"Hesham M.",
""
],
[
"Abouelnaga",
"Yehya",
""
],
[
"Saad",
"Mohamed H.",
""
],
[
"Moustafa",
"Mohamed N.",
""
]
] |
The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted driver detection is concerned with a small set of distractions (mostly, cell phone usage), and unreliable ad-hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves a 90% accuracy. The system consists of a genetically-weighted ensemble of convolutional neural networks; we show that a weighted ensemble of classifiers using a genetic algorithm yields better classification confidence. We also study the effect of different visual elements in distraction detection by means of face and hand localizations, and skin segmentation. Finally, we present a thinned version of our ensemble that can achieve 84.64% classification accuracy and operate in a real-time environment.
|
2204.10606
|
Yuezun Li
|
Xianglong and Yuezun Li and Haipeng Qu and Junyu Dong
|
Enhancing the Transferability via Feature-Momentum Adversarial Attack
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transferable adversarial attacks have drawn increasing attention due to their
practical threat to real-world applications. In particular, the feature-level
adversarial attack is one recent branch that can enhance transferability by
disturbing the intermediate features. Existing methods usually create a
guidance map for the features, where each value indicates the importance of the
corresponding feature element, and then employ an iterative algorithm to
disrupt the features accordingly. However, the guidance map is fixed in
existing methods, so it cannot consistently reflect the behavior of the network
as the image changes during iteration. In this paper, we describe a new
method called Feature-Momentum Adversarial Attack (FMAA) to further improve
transferability. The key idea of our method is that we estimate a guidance map
dynamically at each iteration using momentum to effectively disturb the
category-relevant features. Extensive experiments demonstrate that our method
significantly outperforms other state-of-the-art methods by a large margin on
different target models.
|
[
{
"created": "Fri, 22 Apr 2022 09:52:49 GMT",
"version": "v1"
}
] |
2022-04-25
|
[
[
"Xianglong",
"",
""
],
[
"Li",
"Yuezun",
""
],
[
"Qu",
"Haipeng",
""
],
[
"Dong",
"Junyu",
""
]
] |
Transferable adversarial attacks have drawn increasing attention due to their practical threat to real-world applications. In particular, the feature-level adversarial attack is one recent branch that can enhance transferability by disturbing the intermediate features. Existing methods usually create a guidance map for the features, where each value indicates the importance of the corresponding feature element, and then employ an iterative algorithm to disrupt the features accordingly. However, the guidance map is fixed in existing methods, so it cannot consistently reflect the behavior of the network as the image changes during iteration. In this paper, we describe a new method called Feature-Momentum Adversarial Attack (FMAA) to further improve transferability. The key idea of our method is that we estimate a guidance map dynamically at each iteration using momentum to effectively disturb the category-relevant features. Extensive experiments demonstrate that our method significantly outperforms other state-of-the-art methods by a large margin on different target models.
|
2403.14083
|
Thejan Rajapakshe
|
Thejan Rajapakshe, Rajib Rana, Sara Khalifa, Berrak Sisman, Bjorn W.
Schuller, Carlos Busso
|
emoDARTS: Joint Optimisation of CNN & Sequential Neural Network
Architectures for Superior Speech Emotion Recognition
|
Submitted to IEEE Transactions on Affective Computing on February 19,
2024. arXiv admin note: text overlap with arXiv:2305.14402
| null |
10.1109/ACCESS.2024.3439604
| null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Speech Emotion Recognition (SER) is crucial for enabling computers to
understand the emotions conveyed in human communication. With recent
advancements in Deep Learning (DL), the performance of SER models has
significantly improved. However, designing an optimal DL architecture requires
specialised knowledge and experimental assessments. Fortunately, Neural
Architecture Search (NAS) provides a potential solution for automatically
determining the best DL model. The Differentiable Architecture Search (DARTS)
is a particularly efficient method for discovering optimal models. This study
presents emoDARTS, a DARTS-optimised joint CNN and Sequential Neural Network
(SeqNN: LSTM, RNN) architecture that enhances SER performance. The literature
supports the selection of CNN and LSTM coupling to improve performance.
While DARTS has previously been used to choose CNN and LSTM operations
independently, our technique adds a novel mechanism for selecting CNN and SeqNN
operations in conjunction using DARTS. Unlike earlier work, we do not impose
limits on the layer order of the CNN. Instead, we let DARTS choose the best
layer order inside the DARTS cell. We demonstrate that emoDARTS outperforms
conventionally designed CNN-LSTM models and surpasses the best-reported SER
results achieved through DARTS on CNN-LSTM by evaluating our approach on the
IEMOCAP, MSP-IMPROV, and MSP-Podcast datasets.
|
[
{
"created": "Thu, 21 Mar 2024 02:26:30 GMT",
"version": "v1"
}
] |
2024-08-09
|
[
[
"Rajapakshe",
"Thejan",
""
],
[
"Rana",
"Rajib",
""
],
[
"Khalifa",
"Sara",
""
],
[
"Sisman",
"Berrak",
""
],
[
"Schuller",
"Bjorn W.",
""
],
[
"Busso",
"Carlos",
""
]
] |
Speech Emotion Recognition (SER) is crucial for enabling computers to understand the emotions conveyed in human communication. With recent advancements in Deep Learning (DL), the performance of SER models has significantly improved. However, designing an optimal DL architecture requires specialised knowledge and experimental assessments. Fortunately, Neural Architecture Search (NAS) provides a potential solution for automatically determining the best DL model. The Differentiable Architecture Search (DARTS) is a particularly efficient method for discovering optimal models. This study presents emoDARTS, a DARTS-optimised joint CNN and Sequential Neural Network (SeqNN: LSTM, RNN) architecture that enhances SER performance. The literature supports the selection of CNN and LSTM coupling to improve performance. While DARTS has previously been used to choose CNN and LSTM operations independently, our technique adds a novel mechanism for selecting CNN and SeqNN operations in conjunction using DARTS. Unlike earlier work, we do not impose limits on the layer order of the CNN. Instead, we let DARTS choose the best layer order inside the DARTS cell. We demonstrate that emoDARTS outperforms conventionally designed CNN-LSTM models and surpasses the best-reported SER results achieved through DARTS on CNN-LSTM by evaluating our approach on the IEMOCAP, MSP-IMPROV, and MSP-Podcast datasets.
|
2310.19626
|
Gengchen Mai
|
Zhengliang Liu, Yiwei Li, Qian Cao, Junwen Chen, Tianze Yang, Zihao
Wu, John Hale, John Gibbs, Khaled Rasheed, Ninghao Liu, Gengchen Mai, and
Tianming Liu
|
Transformation vs Tradition: Artificial General Intelligence (AGI) for
Arts and Humanities
| null | null | null | null |
cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Recent advances in artificial general intelligence (AGI), particularly large
language models and creative image generation systems, have demonstrated
impressive capabilities on diverse tasks spanning the arts and humanities.
However, the swift evolution of AGI has also raised critical questions about
its responsible deployment in these culturally significant domains
traditionally seen as profoundly human. This paper provides a comprehensive
analysis of the applications and implications of AGI for text, graphics, audio,
and video pertaining to arts and the humanities. We survey cutting-edge systems
and their usage in areas ranging from poetry to history, marketing to film, and
communication to classical art. We outline substantial concerns pertaining to
factuality, toxicity, biases, and public safety in AGI systems, and propose
mitigation strategies. The paper argues for multi-stakeholder collaboration to
ensure AGI promotes creativity, knowledge, and cultural values without
undermining truth or human dignity. Our timely contribution summarizes a
rapidly developing field, highlighting promising directions while advocating
for responsible progress centering on human flourishing. The analysis lays the
groundwork for further research on aligning AGI's technological capacities with
enduring social goods.
|
[
{
"created": "Mon, 30 Oct 2023 15:19:15 GMT",
"version": "v1"
}
] |
2023-10-31
|
[
[
"Liu",
"Zhengliang",
""
],
[
"Li",
"Yiwei",
""
],
[
"Cao",
"Qian",
""
],
[
"Chen",
"Junwen",
""
],
[
"Yang",
"Tianze",
""
],
[
"Wu",
"Zihao",
""
],
[
"Hale",
"John",
""
],
[
"Gibbs",
"John",
""
],
[
"Rasheed",
"Khaled",
""
],
[
"Liu",
"Ninghao",
""
],
[
"Mai",
"Gengchen",
""
],
[
"Liu",
"Tianming",
""
]
] |
Recent advances in artificial general intelligence (AGI), particularly large language models and creative image generation systems, have demonstrated impressive capabilities on diverse tasks spanning the arts and humanities. However, the swift evolution of AGI has also raised critical questions about its responsible deployment in these culturally significant domains traditionally seen as profoundly human. This paper provides a comprehensive analysis of the applications and implications of AGI for text, graphics, audio, and video pertaining to arts and the humanities. We survey cutting-edge systems and their usage in areas ranging from poetry to history, marketing to film, and communication to classical art. We outline substantial concerns pertaining to factuality, toxicity, biases, and public safety in AGI systems, and propose mitigation strategies. The paper argues for multi-stakeholder collaboration to ensure AGI promotes creativity, knowledge, and cultural values without undermining truth or human dignity. Our timely contribution summarizes a rapidly developing field, highlighting promising directions while advocating for responsible progress centering on human flourishing. The analysis lays the groundwork for further research on aligning AGI's technological capacities with enduring social goods.
|
2305.17671
|
Benjamin Bisping
|
Benjamin Bisping, David N. Jansen
|
Linear-Time--Branching-Time Spectroscopy Accounting for Silent Steps
| null | null | null | null |
cs.LO cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
We provide the first generalized game characterization of van Glabbeek's
linear-time--branching-time spectrum with silent steps. Thereby, one
multi-dimensional energy game can be used to decide a wide array of behavioral
equivalences between stability-respecting branching bisimilarity and weak trace
equivalence in one go. To establish correctness, we relate attacker-winning
energy budgets and distinguishing sublanguages of Hennessy--Milner logic
characterized by eight dimensions of formula expressiveness.
|
[
{
"created": "Sun, 28 May 2023 09:27:02 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2023 16:06:57 GMT",
"version": "v2"
}
] |
2023-10-18
|
[
[
"Bisping",
"Benjamin",
""
],
[
"Jansen",
"David N.",
""
]
] |
We provide the first generalized game characterization of van Glabbeek's linear-time--branching-time spectrum with silent steps. Thereby, one multi-dimensional energy game can be used to decide a wide array of behavioral equivalences between stability-respecting branching bisimilarity and weak trace equivalence in one go. To establish correctness, we relate attacker-winning energy budgets and distinguishing sublanguages of Hennessy--Milner logic characterized by eight dimensions of formula expressiveness.
|
2004.03096
|
Yiming Cui
|
Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu
|
Is Graph Structure Necessary for Multi-hop Question Answering?
|
6 pages, to appear at EMNLP 2020
| null |
10.18653/v1/2020.emnlp-main.583
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, attempting to model texts as graph structures and introducing graph
neural networks to deal with them has become a trend in many NLP research areas.
In this paper, we investigate whether the graph structure is necessary for
multi-hop question answering. Our analysis is centered on HotpotQA. We
construct a strong baseline model to establish that, with the proper use of
pre-trained models, graph structure may not be necessary for multi-hop question
answering. We point out that both graph structure and adjacency matrix are
task-related prior knowledge, and graph-attention can be considered as a
special case of self-attention. Experiments and visualized analysis demonstrate
that graph-attention or the entire graph structure can be replaced by
self-attention or Transformers.
|
[
{
"created": "Tue, 7 Apr 2020 02:59:42 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2020 09:29:19 GMT",
"version": "v2"
}
] |
2020-12-14
|
[
[
"Shao",
"Nan",
""
],
[
"Cui",
"Yiming",
""
],
[
"Liu",
"Ting",
""
],
[
"Wang",
"Shijin",
""
],
[
"Hu",
"Guoping",
""
]
] |
Recently, attempting to model texts as graph structures and introducing graph neural networks to deal with them has become a trend in many NLP research areas. In this paper, we investigate whether the graph structure is necessary for multi-hop question answering. Our analysis is centered on HotpotQA. We construct a strong baseline model to establish that, with the proper use of pre-trained models, graph structure may not be necessary for multi-hop question answering. We point out that both graph structure and adjacency matrix are task-related prior knowledge, and graph-attention can be considered as a special case of self-attention. Experiments and visualized analysis demonstrate that graph-attention or the entire graph structure can be replaced by self-attention or Transformers.
|
1811.02465
|
Gennaro Notomista
|
Gennaro Notomista and Magnus Egerstedt
|
Constraint-Driven Coordinated Control of Multi-Robot Systems
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a reformulation--framed as a constrained
optimization problem--of multi-robot tasks which are encoded through a cost
function that is to be minimized. The advantages of this approach are multiple.
The constraint-based formulation provides a natural way of enabling long-term
robot autonomy applications, where resilience and adaptability to changing
environmental conditions are essential. Moreover, under certain assumptions on
the cost function, the resulting controller is guaranteed to be decentralized.
Furthermore, finite-time convergence can be achieved, while using local
information only, and therefore preserving the decentralized nature of the
algorithm. The developed control framework has been tested on a team of ground
mobile robots implementing long-term environmental monitoring.
|
[
{
"created": "Sun, 4 Nov 2018 23:27:48 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Sep 2019 21:48:09 GMT",
"version": "v2"
}
] |
2019-09-04
|
[
[
"Notomista",
"Gennaro",
""
],
[
"Egerstedt",
"Magnus",
""
]
] |
In this paper we present a reformulation--framed as a constrained optimization problem--of multi-robot tasks which are encoded through a cost function that is to be minimized. The advantages of this approach are multiple. The constraint-based formulation provides a natural way of enabling long-term robot autonomy applications, where resilience and adaptability to changing environmental conditions are essential. Moreover, under certain assumptions on the cost function, the resulting controller is guaranteed to be decentralized. Furthermore, finite-time convergence can be achieved, while using local information only, and therefore preserving the decentralized nature of the algorithm. The developed control framework has been tested on a team of ground mobile robots implementing long-term environmental monitoring.
|
2101.07423
|
G\"ozde \"Ozcan
|
G\"ozde \"Ozcan, Armin Moharrer, Stratis Ioannidis
|
Submodular Maximization via Taylor Series Approximation
|
15 pages, 2 figures, to be published in the SIAM International
Conference on Data Mining proceedings (SDM 2021)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We study submodular maximization problems with matroid constraints, in
particular, problems where the objective can be expressed via compositions of
analytic and multilinear functions. We show that for functions of this form,
the so-called continuous greedy algorithm attains a ratio arbitrarily close to
$(1-1/e) \approx 0.63$ using a deterministic estimation via Taylor series
approximation. This drastically reduces execution time over prior art that uses
sampling.
|
[
{
"created": "Tue, 19 Jan 2021 02:41:45 GMT",
"version": "v1"
}
] |
2021-01-20
|
[
[
"Özcan",
"Gözde",
""
],
[
"Moharrer",
"Armin",
""
],
[
"Ioannidis",
"Stratis",
""
]
] |
We study submodular maximization problems with matroid constraints, in particular, problems where the objective can be expressed via compositions of analytic and multilinear functions. We show that for functions of this form, the so-called continuous greedy algorithm attains a ratio arbitrarily close to $(1-1/e) \approx 0.63$ using a deterministic estimation via Taylor series approximation. This drastically reduces execution time over prior art that uses sampling.
|
2005.14156
|
Mohammad Sina Kiarostami
|
Mohammad Sina Kiarostami and Saleh Khalaj Monfared and Mohammadreza
Daneshvaramoli and Ali Oliayi and Negar Yousefian and Dara Rahmati and Saeid
Gorgin
|
Unlucky Explorer: A Complete non-Overlapping Map Exploration
| null | null | null | null |
cs.AI cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Nowadays, the field of Artificial Intelligence in Computer Games (AI in
Games) is becoming more alluring, since computer games challenge many aspects
of AI with a wide range of problems, particularly general problems. One of
these problems is Exploration, in which an unknown environment must be explored
by one or several agents. In this work, we first introduce the Maze Dash puzzle
as an exploration problem in which the agent must find a Hamiltonian Path
visiting all the cells. We then investigate suitable methods, focusing on
Monte-Carlo Tree Search (MCTS) and SAT, to solve this puzzle quickly and
accurately. An optimization has been applied to the proposed MCTS algorithm to
obtain a promising result. Also, since the prefabricated test cases of this
puzzle are not large enough to assess the proposed method, we have proposed and
employed a technique to generate solvable test cases for evaluating the
approaches. Eventually, the MCTS-based method has been assessed on the
auto-generated test cases and compared with our implemented SAT approach, which
is considered a strong rival. Our comparison indicates that the MCTS-based
approach is a promising method that copes with small- and medium-size test
cases with faster run-time than SAT. However, for certain discussed reasons,
including the features of the problem, the organization of the tree search, and
the approach of MCTS in the Simulation step, MCTS takes more time to execute in
large-size scenarios. Consequently, we have identified the bottleneck of the
MCTS-based method on large test cases, which could be improved in two
real-world problems.
|
[
{
"created": "Thu, 28 May 2020 17:19:24 GMT",
"version": "v1"
}
] |
2020-05-29
|
[
[
"Kiarostami",
"Mohammad Sina",
""
],
[
"Monfared",
"Saleh Khalaj",
""
],
[
"Daneshvaramoli",
"Mohammadreza",
""
],
[
"Oliayi",
"Ali",
""
],
[
"Yousefian",
"Negar",
""
],
[
"Rahmati",
"Dara",
""
],
[
"Gorgin",
"Saeid",
""
]
] |
Nowadays, the field of Artificial Intelligence in Computer Games (AI in Games) is becoming more alluring, since computer games challenge many aspects of AI with a wide range of problems, particularly general problems. One of these problems is Exploration, in which an unknown environment must be explored by one or several agents. In this work, we first introduce the Maze Dash puzzle as an exploration problem in which the agent must find a Hamiltonian Path visiting all the cells. We then investigate suitable methods, focusing on Monte-Carlo Tree Search (MCTS) and SAT, to solve this puzzle quickly and accurately. An optimization has been applied to the proposed MCTS algorithm to obtain a promising result. Also, since the prefabricated test cases of this puzzle are not large enough to assess the proposed method, we have proposed and employed a technique to generate solvable test cases for evaluating the approaches. Eventually, the MCTS-based method has been assessed on the auto-generated test cases and compared with our implemented SAT approach, which is considered a strong rival. Our comparison indicates that the MCTS-based approach is a promising method that copes with small- and medium-size test cases with faster run-time than SAT. However, for certain discussed reasons, including the features of the problem, the organization of the tree search, and the approach of MCTS in the Simulation step, MCTS takes more time to execute in large-size scenarios. Consequently, we have identified the bottleneck of the MCTS-based method on large test cases, which could be improved in two real-world problems.
|
2010.00516
|
Meenakshi Khosla
|
Meenakshi Khosla, Gia H. Ngo, Keith Jamison, Amy Kuceyeski and Mert R.
Sabuncu
|
Neural encoding with visual attention
|
NeurIPS 2020
| null | null | null |
cs.CV cs.LG q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual perception is critically influenced by the focus of attention. Due to
limited resources, it is well known that neural representations are biased in
favor of attended locations. Using concurrent eye-tracking and functional
Magnetic Resonance Imaging (fMRI) recordings from a large cohort of human
subjects watching movies, we first demonstrate that leveraging gaze
information, in the form of attentional masking, can significantly improve
brain response prediction accuracy in a neural encoding model. Next, we propose
a novel approach to neural encoding by including a trainable soft-attention
module. Using our new approach, we demonstrate that it is possible to learn
visual attention policies by end-to-end learning merely on fMRI response data,
and without relying on any eye-tracking. Interestingly, we find that attention
locations estimated by the model on independent data agree well with the
corresponding eye fixation patterns, despite no explicit supervision to do so.
Together, these findings suggest that attention modules can be instrumental in
neural encoding models of visual stimuli.
|
[
{
"created": "Thu, 1 Oct 2020 16:04:21 GMT",
"version": "v1"
}
] |
2020-10-02
|
[
[
"Khosla",
"Meenakshi",
""
],
[
"Ngo",
"Gia H.",
""
],
[
"Jamison",
"Keith",
""
],
[
"Kuceyeski",
"Amy",
""
],
[
"Sabuncu",
"Mert R.",
""
]
] |
Visual perception is critically influenced by the focus of attention. Due to limited resources, it is well known that neural representations are biased in favor of attended locations. Using concurrent eye-tracking and functional Magnetic Resonance Imaging (fMRI) recordings from a large cohort of human subjects watching movies, we first demonstrate that leveraging gaze information, in the form of attentional masking, can significantly improve brain response prediction accuracy in a neural encoding model. Next, we propose a novel approach to neural encoding by including a trainable soft-attention module. Using our new approach, we demonstrate that it is possible to learn visual attention policies by end-to-end learning merely on fMRI response data, and without relying on any eye-tracking. Interestingly, we find that attention locations estimated by the model on independent data agree well with the corresponding eye fixation patterns, despite no explicit supervision to do so. Together, these findings suggest that attention modules can be instrumental in neural encoding models of visual stimuli.
|
2108.13817
|
Pengfei Zhu
|
Pengfei Zhu and Xiaoguang Li and Jian Li and Hai Zhao
|
Unsupervised Open-Domain Question Answering
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Open-domain Question Answering (ODQA) has achieved significant results in
the supervised learning setting. However, the huge annotation demand of an
open domain makes supervised data collection hard to sustain. Though
unsupervised QA and unsupervised Machine Reading Comprehension (MRC) have been
explored to some extent, unsupervised ODQA has, to the best of our knowledge,
not been touched. This paper thus pioneers the work of unsupervised ODQA by
formally introducing the task and proposing a series of key data construction
methods. Our exploration in this work shows that unsupervised ODQA can reach
up to 86% of the performance of supervised counterparts.
|
[
{
"created": "Tue, 31 Aug 2021 13:19:52 GMT",
"version": "v1"
}
] |
2021-09-01
|
[
[
"Zhu",
"Pengfei",
""
],
[
"Li",
"Xiaoguang",
""
],
[
"Li",
"Jian",
""
],
[
"Zhao",
"Hai",
""
]
] |
Open-domain Question Answering (ODQA) has achieved significant results in the supervised learning setting. However, the huge annotation demand of an open domain makes supervised data collection hard to sustain. Though unsupervised QA and unsupervised Machine Reading Comprehension (MRC) have been explored to some extent, unsupervised ODQA has, to the best of our knowledge, not been touched. This paper thus pioneers the work of unsupervised ODQA by formally introducing the task and proposing a series of key data construction methods. Our exploration in this work shows that unsupervised ODQA can reach up to 86% of the performance of supervised counterparts.
|
2111.03550
|
Luis Miguel Contreras Murillo
|
Luis M. Contreras, Jose A. Ordonez-Lucena
|
On Slice Isolation Options in the Transport Network and Associated
Feasibility Indicators
|
5 pages, 2 tables, 1 figure, conference
| null |
10.1109/NetSoft51509.2021.9492546
| null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Isolation is one of the more relevant attributes associated with the idea of
network slicing, introduced by 5G services. Through isolation it is expected
that slices from different customers can gracefully coexist without
interfering with each other, in the sense that no misbehavior or unforeseen
demand from one slice customer can affect the communication service received
by any other slice customer supported atop the same physical transport
infrastructure. This paper surveys and compares different technical approaches
that can be taken to provide distinct isolation levels in the transport
network, as a major component of end-to-end network slices. Furthermore, a
number of isolation feasibility indicators are defined and proposed. These
indicators are based on the approaches referred to above, as a means of guiding
orchestration decisions at the time of provisioning or reconfiguring the
transport slices in the network.
|
[
{
"created": "Fri, 5 Nov 2021 15:14:54 GMT",
"version": "v1"
}
] |
2021-11-08
|
[
[
"Contreras",
"Luis M.",
""
],
[
"Ordonez-Lucena",
"Jose A.",
""
]
] |
Isolation is one of the more relevant attributes associated with the idea of network slicing, introduced by 5G services. Through isolation it is expected that slices from different customers can gracefully coexist without interfering with each other, in the sense that no misbehavior or unforeseen demand from one slice customer can affect the communication service received by any other slice customer supported atop the same physical transport infrastructure. This paper surveys and compares different technical approaches that can be taken to provide distinct isolation levels in the transport network, as a major component of end-to-end network slices. Furthermore, a number of isolation feasibility indicators are defined and proposed. These indicators are based on the approaches referred to above, as a means of guiding orchestration decisions at the time of provisioning or reconfiguring the transport slices in the network.
|
2112.03174
|
Raghav Rawat
|
Raghav Rawat, Shreyash Gupta, Shreyas Mohapatra, Sujata Priyambada
Mishra, Sreesankar Rajagopal
|
Intelligent Acoustic Module for Autonomous Vehicles using Fast Gated
Recurrent approach
|
6 pages, 8 figures
| null | null | null |
cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper elucidates a model for acoustic single- and multi-tone
classification on resource-constrained edge devices. The proposed model is a
state-of-the-art Fast Accurate Stable Tiny Gated Recurrent Neural Network. This
model has resulted in improved performance metrics and a smaller size compared
to previously hypothesized methods, using fewer parameters with higher
efficiency and employing a noise reduction algorithm. The model is implemented
as an acoustic AI module focused on sound identification and localization, for
deployment on AI systems such as that of an autonomous car. Further, the
inclusion of localization techniques carries the potential of adding a new
dimension to the multi-tone classifiers present in autonomous vehicles, as
their demand increases in urban cities and developing countries in the future.
|
[
{
"created": "Mon, 6 Dec 2021 17:06:48 GMT",
"version": "v1"
}
] |
2021-12-07
|
[
[
"Rawat",
"Raghav",
""
],
[
"Gupta",
"Shreyash",
""
],
[
"Mohapatra",
"Shreyas",
""
],
[
"Mishra",
"Sujata Priyambada",
""
],
[
"Rajagopal",
"Sreesankar",
""
]
] |
This paper elucidates a model for acoustic single- and multi-tone classification on resource-constrained edge devices. The proposed model is a state-of-the-art Fast Accurate Stable Tiny Gated Recurrent Neural Network. This model has resulted in improved performance metrics and a smaller size compared to previously hypothesized methods, using fewer parameters with higher efficiency and employing a noise reduction algorithm. The model is implemented as an acoustic AI module focused on sound identification and localization, for deployment on AI systems such as that of an autonomous car. Further, the inclusion of localization techniques carries the potential of adding a new dimension to the multi-tone classifiers present in autonomous vehicles, as their demand increases in urban cities and developing countries in the future.
|
2306.07730
|
Paul Streli
|
Valentin Bieri, Paul Streli, Berken Utku Demirel and Christian Holz
|
BeliefPPG: Uncertainty-aware Heart Rate Estimation from PPG signals via
Belief Propagation
|
Conference on Uncertainty in Artificial Intelligence (UAI) 2023. The
first two authors contributed equally
| null | null | null |
cs.LG cs.CV eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a novel learning-based method that achieves state-of-the-art
performance on several heart rate estimation benchmarks extracted from
photoplethysmography signals (PPG). We consider the evolution of the heart rate
in the context of a discrete-time stochastic process that we represent as a
hidden Markov model. We derive a distribution over possible heart rate values
for a given PPG signal window through a trained neural network. Using belief
propagation, we incorporate the statistical distribution of heart rate changes
to refine these estimates in a temporal context. From this, we obtain a
quantized probability distribution over the range of possible heart rate values
that captures a meaningful and well-calibrated estimate of the inherent
predictive uncertainty. We show the robustness of our method on eight public
datasets with three different cross-validation experiments.
|
[
{
"created": "Tue, 13 Jun 2023 12:36:00 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Jun 2023 16:29:53 GMT",
"version": "v2"
}
] |
2023-06-16
|
[
[
"Bieri",
"Valentin",
""
],
[
"Streli",
"Paul",
""
],
[
"Demirel",
"Berken Utku",
""
],
[
"Holz",
"Christian",
""
]
] |
We present a novel learning-based method that achieves state-of-the-art performance on several heart rate estimation benchmarks extracted from photoplethysmography signals (PPG). We consider the evolution of the heart rate in the context of a discrete-time stochastic process that we represent as a hidden Markov model. We derive a distribution over possible heart rate values for a given PPG signal window through a trained neural network. Using belief propagation, we incorporate the statistical distribution of heart rate changes to refine these estimates in a temporal context. From this, we obtain a quantized probability distribution over the range of possible heart rate values that captures a meaningful and well-calibrated estimate of the inherent predictive uncertainty. We show the robustness of our method on eight public datasets with three different cross-validation experiments.
|
1709.00779
|
Yingzhe Li
|
Yingzhe Li, Francois Baccelli, Jeffrey G. Andrews, Jianzhong Charlie
Zhang
|
Directional Cell Search Delay Analysis for Cellular Networks with Static
Users
|
Submitted to IEEE Transactions on Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cell search is the process for a user to detect its neighboring base stations
(BSs) and make a cell selection decision. Due to the importance of beamforming
gain in millimeter wave (mmWave) and massive MIMO cellular networks, the
directional cell search delay performance is investigated. A cellular network
with fixed BS and user locations is considered, so that strong temporal
correlations exist for the SINR experienced at each BS and user. For Poisson
cellular networks with Rayleigh fading channels, a closed-form expression for
the spatially averaged mean cell search delay of all users is derived. This
mean cell search delay for a noise-limited network (e.g., mmWave network) is
proved to be infinite whenever the non-line-of-sight (NLOS) path loss exponent
is larger than 2. For interference-limited networks, a phase transition for the
mean cell search delay is shown to exist in terms of the number of BS
antennas/beams $M$: the mean cell search delay is infinite when $M$ is smaller
than a threshold and finite otherwise. Beam-sweeping is also demonstrated to be
effective in decreasing the cell search delay, especially for the cell edge
users.
|
[
{
"created": "Mon, 4 Sep 2017 00:58:16 GMT",
"version": "v1"
}
] |
2017-09-05
|
[
[
"Li",
"Yingzhe",
""
],
[
"Baccelli",
"Francois",
""
],
[
"Andrews",
"Jeffrey G.",
""
],
[
"Zhang",
"Jianzhong Charlie",
""
]
] |
Cell search is the process for a user to detect its neighboring base stations (BSs) and make a cell selection decision. Due to the importance of beamforming gain in millimeter wave (mmWave) and massive MIMO cellular networks, the directional cell search delay performance is investigated. A cellular network with fixed BS and user locations is considered, so that strong temporal correlations exist for the SINR experienced at each BS and user. For Poisson cellular networks with Rayleigh fading channels, a closed-form expression for the spatially averaged mean cell search delay of all users is derived. This mean cell search delay for a noise-limited network (e.g., mmWave network) is proved to be infinite whenever the non-line-of-sight (NLOS) path loss exponent is larger than 2. For interference-limited networks, a phase transition for the mean cell search delay is shown to exist in terms of the number of BS antennas/beams $M$: the mean cell search delay is infinite when $M$ is smaller than a threshold and finite otherwise. Beam-sweeping is also demonstrated to be effective in decreasing the cell search delay, especially for the cell edge users.
|
1106.1311
|
Bart Demoen
|
Phuong-Lan Nguyen and Bart Demoen
|
Representation Sharing for Prolog
|
37 pages, 11 figures, 3 tables; To appear in Theory and Practice of
Logic Programming (TPLP)
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Representation sharing can reduce the memory footprint of a program by
sharing one representation between duplicate terms. The most common
implementation of representation sharing in functional programming systems is
known as hash-consing. In the context of Prolog, representation sharing has
been given little attention. Some current techniques that deal with
representation sharing are reviewed. The new contributions are: (1) an easy
implementation of {\em input sharing} for {\em findall/3}; (2) a description of
a {\em sharer} module that introduces representation sharing at runtime. Their
realization is shown in the context of the WAM as implemented by hProlog. Both
can be adapted to any WAM-like Prolog implementation. The sharer works
independently of the garbage collector, but it can be made to cooperate with
the garbage collector. Benchmark results show that the sharer has a cost
comparable to the heap garbage collector, that its effectiveness is highly
application dependent, and that its policy must be tuned to the collector. To
appear in Theory and Practice of Logic Programming (TPLP)
|
[
{
"created": "Tue, 7 Jun 2011 10:34:53 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jun 2011 07:09:53 GMT",
"version": "v2"
}
] |
2011-07-01
|
[
[
"Nguyen",
"Phuong-Lan",
""
],
[
"Demoen",
"Bart",
""
]
] |
Representation sharing can reduce the memory footprint of a program by sharing one representation between duplicate terms. The most common implementation of representation sharing in functional programming systems is known as hash-consing. In the context of Prolog, representation sharing has been given little attention. Some current techniques that deal with representation sharing are reviewed. The new contributions are: (1) an easy implementation of {\em input sharing} for {\em findall/3}; (2) a description of a {\em sharer} module that introduces representation sharing at runtime. Their realization is shown in the context of the WAM as implemented by hProlog. Both can be adapted to any WAM-like Prolog implementation. The sharer works independently of the garbage collector, but it can be made to cooperate with the garbage collector. Benchmark results show that the sharer has a cost comparable to the heap garbage collector, that its effectiveness is highly application dependent, and that its policy must be tuned to the collector. To appear in Theory and Practice of Logic Programming (TPLP)
|
2406.12058
|
Seyedali Mohammadi
|
Seyedali Mohammadi, Edward Raff, Jinendra Malekar, Vedant Palit,
Francis Ferraro, and Manas Gaur
|
WellDunn: On the Robustness and Explainability of Language Models and
Large Language Models in Identifying Wellness Dimensions
|
26 pages, including reference and appendix sections, 8 figures, and
16 tables
| null | null | null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language Models (LMs) are being proposed for mental health applications where
the heightened risk of adverse outcomes means predictive performance may not be
a sufficient litmus test of a model's utility in clinical practice. A model
that can be trusted for practice should have a correspondence between
explanation and clinical determination, yet no prior research has examined the
attention fidelity of these models and their effect on ground truth
explanations. We introduce an evaluation design that focuses on the robustness
and explainability of LMs in identifying Wellness Dimensions (WD). We focus on
two mental health and well-being datasets: (a) Multi-label Classification-based
MultiWD, and (b) WellXplain for evaluating attention mechanism veracity against
expert-labeled explanations. The labels are based on Halbert Dunn's theory of
wellness, which gives grounding to our evaluation. We reveal four surprising
results about LMs/LLMs: (1) Despite their human-like capabilities, GPT-3.5/4
lag behind RoBERTa, and MedAlpaca, a fine-tuned LLM, fails to deliver any
remarkable improvements in performance or explanations. (2) Re-examining LMs'
predictions based on a confidence-oriented loss function reveals a significant
performance drop. (3) Across all LMs/LLMs, the alignment between attention and
explanations remains low, with LLMs scoring a dismal 0.0. (4) Most mental
health-specific LMs/LLMs overlook domain-specific knowledge and undervalue
explanations, causing these discrepancies. This study highlights the need for
further research into their consistency and explanations in mental health and
well-being.
|
[
{
"created": "Mon, 17 Jun 2024 19:50:40 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Jun 2024 18:19:39 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Jun 2024 04:08:12 GMT",
"version": "v3"
}
] |
2024-07-01
|
[
[
"Mohammadi",
"Seyedali",
""
],
[
"Raff",
"Edward",
""
],
[
"Malekar",
"Jinendra",
""
],
[
"Palit",
"Vedant",
""
],
[
"Ferraro",
"Francis",
""
],
[
"Gaur",
"Manas",
""
]
] |
Language Models (LMs) are being proposed for mental health applications where the heightened risk of adverse outcomes means predictive performance may not be a sufficient litmus test of a model's utility in clinical practice. A model that can be trusted for practice should have a correspondence between explanation and clinical determination, yet no prior research has examined the attention fidelity of these models and their effect on ground truth explanations. We introduce an evaluation design that focuses on the robustness and explainability of LMs in identifying Wellness Dimensions (WD). We focus on two mental health and well-being datasets: (a) Multi-label Classification-based MultiWD, and (b) WellXplain for evaluating attention mechanism veracity against expert-labeled explanations. The labels are based on Halbert Dunn's theory of wellness, which gives grounding to our evaluation. We reveal four surprising results about LMs/LLMs: (1) Despite their human-like capabilities, GPT-3.5/4 lag behind RoBERTa, and MedAlpaca, a fine-tuned LLM, fails to deliver any remarkable improvements in performance or explanations. (2) Re-examining LMs' predictions based on a confidence-oriented loss function reveals a significant performance drop. (3) Across all LMs/LLMs, the alignment between attention and explanations remains low, with LLMs scoring a dismal 0.0. (4) Most mental health-specific LMs/LLMs overlook domain-specific knowledge and undervalue explanations, causing these discrepancies. This study highlights the need for further research into their consistency and explanations in mental health and well-being.
|
2006.15482
|
Rui Liu
|
Chao Huang and Rui Liu
|
Robot Inner Attention Modeling for Task-Adaptive Teaming of
Heterogeneous Multi Robots
|
submission for the Robotics Science and Systems (RSS) 2020 workshop
-- Heterogeneous Multi-Robot Task Allocation and Coordination
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Attracted by team scale and function diversity, a heterogeneous multi-robot
system (HMRS), where multiple robots with different functions and numbers are
coordinated to perform tasks, has been widely used for complex and large-scale
scenarios, including disaster search and rescue, site surveillance, and social
security. However, due to the variety of the task requirements, it is
challenging to accurately compose a robot team with appropriate sizes and
functions to dynamically satisfy task needs while limiting the robot resource
cost to a low level. To solve this problem, in this paper, a novel adaptive
cooperation method, inner attention (innerATT), is developed to flexibly team
heterogeneous robots to execute tasks as task types and environments change.
innerATT is designed by integrating a novel attention mechanism into a
multi-agent actor-critic reinforcement learning architecture. With this
attention mechanism, robot capabilities are analyzed to flexibly form teams
that meet task requirements. Scenarios with different task variety ("Single
Task", "Double Task", and "Mixed Task") were designed. The effectiveness of
innerATT was validated by its accuracy in flexible cooperation.
|
[
{
"created": "Sun, 28 Jun 2020 01:25:35 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Mar 2021 20:19:14 GMT",
"version": "v2"
}
] |
2021-03-16
|
[
[
"Huang",
"Chao",
""
],
[
"Liu",
"Rui",
""
]
] |
Attracted by team scale and function diversity, a heterogeneous multi-robot system (HMRS), where multiple robots with different functions and numbers are coordinated to perform tasks, has been widely used for complex and large-scale scenarios, including disaster search and rescue, site surveillance, and social security. However, due to the variety of the task requirements, it is challenging to accurately compose a robot team with appropriate sizes and functions to dynamically satisfy task needs while limiting the robot resource cost to a low level. To solve this problem, in this paper, a novel adaptive cooperation method, inner attention (innerATT), is developed to flexibly team heterogeneous robots to execute tasks as task types and environments change. innerATT is designed by integrating a novel attention mechanism into a multi-agent actor-critic reinforcement learning architecture. With this attention mechanism, robot capabilities are analyzed to flexibly form teams that meet task requirements. Scenarios with different task variety ("Single Task", "Double Task", and "Mixed Task") were designed. The effectiveness of innerATT was validated by its accuracy in flexible cooperation.
|
1004.4759
|
Mikkel Baun Kj{\ae}rgaard
|
Mikkel Baun Kj{\ae}rgaard
|
Indoor Positioning with Radio Location Fingerprinting
|
PhD Dissertation, Aarhus University
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An increasingly important requirement for many novel applications is sensing
the positions of people, equipment, etc. GPS technology has proven itself as a
successful technology for positioning in outdoor environments, but indoors no
technology has yet gained similarly wide-scale adoption. A promising indoor
positioning technique is radio-based location fingerprinting, which has the
major advantage of exploiting already existing radio infrastructures, like IEEE
802.11, avoiding extra deployment costs and effort. The research goal of
this thesis is to address the limitations of current indoor location
fingerprinting systems. In particular, the aim is to advance location
fingerprinting techniques for the challenges of handling heterogeneous clients,
scalability to many clients, and interference between communication and
positioning. The wireless clients used for location fingerprinting are
heterogeneous even when only considering clients for the same technology.
Heterogeneity is a challenge for location fingerprinting because it severely
decreases the precision of location fingerprinting. To support many clients,
location fingerprinting has to address how to scale estimate calculation,
measurement distribution, and distribution of position estimates. This is a
challenge because of the number of calculations involved and the frequency of
measurements and position updates. Positioning using location fingerprinting
requires the measurement of, for instance, signal strength for nearby base
stations. However, many wireless communication technologies block communication
while collecting such measurements. This interference is a challenge because it
is not desirable that positioning disables communication. An additional goal is
to improve the conceptual foundation of location fingerprinting. A better
foundation will aid researchers to better survey and design location
fingerprinting systems.
|
[
{
"created": "Tue, 27 Apr 2010 10:47:21 GMT",
"version": "v1"
}
] |
2010-04-28
|
[
[
"Kjærgaard",
"Mikkel Baun",
""
]
] |
An increasingly important requirement for many novel applications is sensing the positions of people, equipment, etc. GPS technology has proven itself as a successful technology for positioning in outdoor environments, but indoors no technology has yet gained similarly wide-scale adoption. A promising indoor positioning technique is radio-based location fingerprinting, having the major advantage of exploiting already existing radio infrastructures, like IEEE 802.11, which avoids extra deployment costs and effort. The research goal of this thesis is to address the limitations of current indoor location fingerprinting systems. In particular, the aim is to advance location fingerprinting techniques for the challenges of handling heterogeneous clients, scalability to many clients, and interference between communication and positioning. The wireless clients used for location fingerprinting are heterogeneous even when only considering clients for the same technology. Heterogeneity is a challenge for location fingerprinting because it severely decreases the precision of location fingerprinting. To support many clients, location fingerprinting has to address how to scale estimate calculation, measurement distribution, and distribution of position estimates. This is a challenge because of the number of calculations involved and the frequency of measurements and position updates. Positioning using location fingerprinting requires the measurement of, for instance, signal strength for nearby base stations. However, many wireless communication technologies block communication while collecting such measurements. This interference is a challenge because it is not desirable that positioning disables communication. An additional goal is to improve the conceptual foundation of location fingerprinting. A better foundation will aid researchers to better survey and design location fingerprinting systems.
|
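The nearest-neighbor scheme at the heart of radio location fingerprinting can be sketched in a few lines. This is a hedged illustration only: the radio map, the RSSI values, and the `estimate_position` helper are invented for the example and are not taken from the thesis, which goes well beyond this baseline (heterogeneous clients, scalability, interference).

```python
import math

# Offline phase: a radio map of surveyed positions and the signal
# strengths (illustrative RSSI values in dBm) measured there from
# three base stations. All numbers are made up for the sketch.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -60, -75],
    (0.0, 5.0): [-60, -75, -55],
    (5.0, 5.0): [-70, -55, -60],
}

def estimate_position(observed, k=2):
    """Online phase: estimate position as the average of the k surveyed
    positions whose stored fingerprints are closest (in Euclidean
    signal-space distance) to the observed signal strengths."""
    ranked = sorted(
        radio_map.items(),
        key=lambda item: math.dist(item[1], observed),
    )
    nearest = [pos for pos, _ in ranked[:k]]
    x = sum(p[0] for p in nearest) / k
    y = sum(p[1] for p in nearest) / k
    return (x, y)
```

With `k=1` this degenerates to snapping to the closest surveyed point; averaging over `k>1` neighbors smooths the estimate between survey locations.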
2209.09178
|
Yunsheng Ma
|
Yunsheng Ma and Ziran Wang
|
ViT-DD: Multi-Task Vision Transformer for Semi-Supervised Driver
Distraction Detection
|
7 pages, 3 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Ensuring traffic safety and mitigating accidents in modern driving is of
paramount importance, and computer vision technologies have the potential to
significantly contribute to this goal. This paper presents a multi-modal Vision
Transformer for Driver Distraction Detection (termed ViT-DD), which
incorporates inductive information from training signals related to both
distraction detection and driver emotion recognition. Additionally, a
self-learning algorithm is developed, allowing for the seamless integration of
driver data without emotion labels into the multi-task training process of
ViT-DD. Experimental results reveal that the proposed ViT-DD surpasses existing
state-of-the-art methods for driver distraction detection by 6.5% and 0.9% on
the SFDDD and AUCDD datasets, respectively.
|
[
{
"created": "Mon, 19 Sep 2022 16:56:51 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Sep 2022 16:16:13 GMT",
"version": "v2"
},
{
"created": "Sat, 13 May 2023 02:51:53 GMT",
"version": "v3"
},
{
"created": "Tue, 6 Feb 2024 17:48:10 GMT",
"version": "v4"
}
] |
2024-02-07
|
[
[
"Ma",
"Yunsheng",
""
],
[
"Wang",
"Ziran",
""
]
] |
Ensuring traffic safety and mitigating accidents in modern driving is of paramount importance, and computer vision technologies have the potential to significantly contribute to this goal. This paper presents a multi-modal Vision Transformer for Driver Distraction Detection (termed ViT-DD), which incorporates inductive information from training signals related to both distraction detection and driver emotion recognition. Additionally, a self-learning algorithm is developed, allowing for the seamless integration of driver data without emotion labels into the multi-task training process of ViT-DD. Experimental results reveal that the proposed ViT-DD surpasses existing state-of-the-art methods for driver distraction detection by 6.5% and 0.9% on the SFDDD and AUCDD datasets, respectively.
|
1811.12611
|
Binghui Chen
|
Binghui Chen, Weihong Deng, Haifeng Shen
|
Virtual Class Enhanced Discriminative Embedding Learning
|
NeurIPS 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, learning discriminative features to improve recognition performance
has gradually become a primary goal of deep learning, and numerous
remarkable works have emerged. In this paper, we propose a novel yet extremely
simple method \textbf{Virtual Softmax} to enhance the discriminative property
of learned features by injecting a dynamic virtual negative class into the
original softmax. Injecting the virtual class aims to enlarge the inter-class
margin and compress the intra-class distribution by strengthening the decision
boundary constraint. Although it may seem counterintuitive to optimize with
this additional virtual class, we show that our method derives from an
intuitive and clear motivation, and it indeed encourages the features to be
more compact and separable. This paper empirically demonstrates the
superiority of Virtual Softmax, improving performance on a variety of object classification and
face verification tasks.
|
[
{
"created": "Fri, 30 Nov 2018 04:43:20 GMT",
"version": "v1"
}
] |
2018-12-03
|
[
[
"Chen",
"Binghui",
""
],
[
"Deng",
"Weihong",
""
],
[
"Shen",
"Haifeng",
""
]
] |
Recently, learning discriminative features to improve recognition performance has gradually become a primary goal of deep learning, and numerous remarkable works have emerged. In this paper, we propose a novel yet extremely simple method \textbf{Virtual Softmax} to enhance the discriminative property of learned features by injecting a dynamic virtual negative class into the original softmax. Injecting the virtual class aims to enlarge the inter-class margin and compress the intra-class distribution by strengthening the decision boundary constraint. Although it may seem counterintuitive to optimize with this additional virtual class, we show that our method derives from an intuitive and clear motivation, and it indeed encourages the features to be more compact and separable. This paper empirically demonstrates the superiority of Virtual Softmax, improving performance on a variety of object classification and face verification tasks.
|
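The core trick described in the Virtual Softmax abstract, appending one extra "virtual" negative logit to the ordinary softmax, can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: here the virtual logit is assumed to be ||W_y||·||x||, the largest score any class weight of norm ||W_y|| could assign to x, which tightens the decision boundary for the true class y.

```python
import numpy as np

def virtual_softmax_probs(x, W, y):
    """Softmax probabilities with one dynamic virtual negative class.

    x : feature vector, shape (d,)
    W : real class weight matrix, shape (C, d)
    y : index of the ground-truth class

    Assumption for this sketch: the virtual logit is ||W_y|| * ||x||,
    an upper bound (by Cauchy-Schwarz) on what any weight of norm
    ||W_y|| could score, so it always rivals the true-class logit.
    """
    logits = W @ x                                    # shape (C,)
    virtual = np.linalg.norm(W[y]) * np.linalg.norm(x)
    z = np.append(logits, virtual)                    # shape (C + 1,)
    z -= z.max()                                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p
```

Because the virtual logit upper-bounds the true-class logit, the training signal keeps pushing the feature toward its class weight direction, which is the margin-enlarging effect the abstract describes.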
1909.00324
|
Yunlong Liang
|
Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen and
Jie Zhou
|
A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment
Analysis
|
Accepted at EMNLP 2019 as a long paper. Code is available at
https://github.com/XL2248/AGDT
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aspect based sentiment analysis (ABSA) aims to identify the sentiment
polarity towards the given aspect in a sentence, while previous models
typically exploit an aspect-independent (weakly associative) encoder for
sentence representation generation. In this paper, we propose a novel
Aspect-Guided Deep Transition model, named AGDT, which utilizes the given
aspect to guide the sentence encoding from scratch with the specially-designed
deep transition architecture. Furthermore, an aspect-oriented objective is
designed to enforce AGDT to reconstruct the given aspect with the generated
sentence representation. In doing so, our AGDT can accurately generate
aspect-specific sentence representation, and thus conduct more accurate
sentiment predictions. Experimental results on multiple SemEval datasets
demonstrate the effectiveness of our proposed approach, which significantly
outperforms the best reported results with the same setting.
|
[
{
"created": "Sun, 1 Sep 2019 05:22:30 GMT",
"version": "v1"
}
] |
2019-09-04
|
[
[
"Liang",
"Yunlong",
""
],
[
"Meng",
"Fandong",
""
],
[
"Zhang",
"Jinchao",
""
],
[
"Xu",
"Jinan",
""
],
[
"Chen",
"Yufeng",
""
],
[
"Zhou",
"Jie",
""
]
] |
Aspect based sentiment analysis (ABSA) aims to identify the sentiment polarity towards the given aspect in a sentence, while previous models typically exploit an aspect-independent (weakly associative) encoder for sentence representation generation. In this paper, we propose a novel Aspect-Guided Deep Transition model, named AGDT, which utilizes the given aspect to guide the sentence encoding from scratch with the specially-designed deep transition architecture. Furthermore, an aspect-oriented objective is designed to enforce AGDT to reconstruct the given aspect with the generated sentence representation. In doing so, our AGDT can accurately generate aspect-specific sentence representation, and thus conduct more accurate sentiment predictions. Experimental results on multiple SemEval datasets demonstrate the effectiveness of our proposed approach, which significantly outperforms the best reported results with the same setting.
|
2402.08576
|
Keegan Harris
|
Keegan Harris, Zhiwei Steven Wu, Maria-Florina Balcan
|
Regret Minimization in Stackelberg Games with Side Information
| null | null | null | null |
cs.GT cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Algorithms for playing in Stackelberg games have been deployed in real-world
domains including airport security, anti-poaching efforts, and cyber-crime
prevention. However, these algorithms often fail to take into consideration the
additional information available to each player (e.g. traffic patterns, weather
conditions, network congestion), a salient feature of reality which may
significantly affect both players' optimal strategies. We formalize such
settings as Stackelberg games with side information, in which both players
observe an external context before playing. The leader commits to a
(context-dependent) strategy, and the follower best-responds to both the
leader's strategy and the context. We focus on the online setting in which a
sequence of followers arrive over time, and the context may change from
round-to-round. In sharp contrast to the non-contextual version, we show that
it is impossible for the leader to achieve good performance (measured by
regret) in the full adversarial setting. Motivated by our impossibility result,
we show that no-regret learning is possible in two natural relaxations: the
setting in which the sequence of followers is chosen stochastically and the
sequence of contexts is adversarial, and the setting in which the sequence of
contexts is stochastic and the sequence of followers is chosen by an adversary.
|
[
{
"created": "Tue, 13 Feb 2024 16:24:57 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2024 19:20:51 GMT",
"version": "v2"
},
{
"created": "Thu, 23 May 2024 14:39:31 GMT",
"version": "v3"
}
] |
2024-05-24
|
[
[
"Harris",
"Keegan",
""
],
[
"Wu",
"Zhiwei Steven",
""
],
[
"Balcan",
"Maria-Florina",
""
]
] |
Algorithms for playing in Stackelberg games have been deployed in real-world domains including airport security, anti-poaching efforts, and cyber-crime prevention. However, these algorithms often fail to take into consideration the additional information available to each player (e.g. traffic patterns, weather conditions, network congestion), a salient feature of reality which may significantly affect both players' optimal strategies. We formalize such settings as Stackelberg games with side information, in which both players observe an external context before playing. The leader commits to a (context-dependent) strategy, and the follower best-responds to both the leader's strategy and the context. We focus on the online setting in which a sequence of followers arrive over time, and the context may change from round-to-round. In sharp contrast to the non-contextual version, we show that it is impossible for the leader to achieve good performance (measured by regret) in the full adversarial setting. Motivated by our impossibility result, we show that no-regret learning is possible in two natural relaxations: the setting in which the sequence of followers is chosen stochastically and the sequence of contexts is adversarial, and the setting in which the sequence of contexts is stochastic and the sequence of followers is chosen by an adversary.
|
2401.01987
|
Leon Scharw\"achter
|
Leon Scharw\"achter and Sebastian Otte
|
Representation Learning of Multivariate Time Series using Attention and
Adversarial Training
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A critical factor in trustworthy machine learning is to develop robust
representations of the training data. Only under this guarantee can methods
legitimately generate artificial data, for example, to counteract imbalanced
datasets or to provide counterfactual explanations for black-box decision-making
systems. In recent years, Generative Adversarial Networks (GANs) have shown
considerable results in forming stable representations and generating realistic
data. While many applications focus on generating image data, less effort has
been made in generating time series data, especially multivariate signals. In
this work, a Transformer-based autoencoder is proposed that is regularized
using an adversarial training scheme to generate artificial multivariate time
series signals. The representation is evaluated using t-SNE visualizations,
Dynamic Time Warping (DTW) and Entropy scores. Our results indicate that the
generated signals exhibit higher similarity to an exemplary dataset than using
a convolutional network approach.
|
[
{
"created": "Wed, 3 Jan 2024 21:32:46 GMT",
"version": "v1"
}
] |
2024-01-05
|
[
[
"Scharwächter",
"Leon",
""
],
[
"Otte",
"Sebastian",
""
]
] |
A critical factor in trustworthy machine learning is to develop robust representations of the training data. Only under this guarantee can methods legitimately generate artificial data, for example, to counteract imbalanced datasets or to provide counterfactual explanations for black-box decision-making systems. In recent years, Generative Adversarial Networks (GANs) have shown considerable results in forming stable representations and generating realistic data. While many applications focus on generating image data, less effort has been made in generating time series data, especially multivariate signals. In this work, a Transformer-based autoencoder is proposed that is regularized using an adversarial training scheme to generate artificial multivariate time series signals. The representation is evaluated using t-SNE visualizations, Dynamic Time Warping (DTW) and Entropy scores. Our results indicate that the generated signals exhibit higher similarity to an exemplary dataset than using a convolutional network approach.
|
1812.10891
|
EPTCS
|
Kenichi Asai (Ochanomizu University), Mark Shinwell (Jane Street
Europe)
|
Proceedings ML Family Workshop / OCaml Users and Developers workshops
| null |
EPTCS 285, 2018
|
10.4204/EPTCS.285
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This volume contains the joint post-proceedings of the 2016 edition of the ML
Family Workshop and OCaml Users and Developers Workshop, held in Nara, Japan,
in affiliation with ICFP 2016.
|
[
{
"created": "Fri, 28 Dec 2018 04:40:03 GMT",
"version": "v1"
}
] |
2018-12-31
|
[
[
"Asai",
"Kenichi",
"",
"Ochanomizu University"
],
[
"Shinwell",
"Mark",
"",
"Jane Street\n Europe"
]
] |
This volume contains the joint post-proceedings of the 2016 edition of the ML Family Workshop and OCaml Users and Developers Workshop, held in Nara, Japan, in affiliation with ICFP 2016.
|
2110.08257
|
Leman Akoglu
|
Guilherme D. F. Silva, Leman Akoglu, Robson L. F. Cordeiro
|
C-AllOut: Catching & Calling Outliers by Type
|
9+4 pages, 3 figures, 11 tables
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Given an unlabeled dataset, wherein we have access only to pairwise
similarities (or distances), how can we effectively (1) detect outliers, and
(2) annotate/tag the outliers by type? Outlier detection has a large
literature, yet we find a key gap in the field: to our knowledge, no existing
work addresses the outlier annotation problem. Outliers are broadly classified
into 3 types, representing distinct patterns that could be valuable to
analysts: (a) global outliers are severe yet isolated cases that do not repeat,
e.g., a data collection error; (b) local outliers diverge from their peers
within a context, e.g., a particularly short basketball player; and (c)
collective outliers are isolated micro-clusters that may indicate coalition or
repetitions, e.g., frauds that exploit the same loophole. This paper presents
C-AllOut: a novel and effective outlier detector that annotates outliers by
type. It is parameter-free and scalable, besides working only with pairwise
similarities (or distances) when it is needed. We show that C-AllOut achieves
on par or significantly better performance than state-of-the-art detectors when
spotting outliers regardless of their type. It is also highly effective in
annotating outliers of particular types, a task that none of the baselines can
perform.
|
[
{
"created": "Wed, 13 Oct 2021 14:25:52 GMT",
"version": "v1"
}
] |
2021-10-19
|
[
[
"Silva",
"Guilherme D. F.",
""
],
[
"Akoglu",
"Leman",
""
],
[
"Cordeiro",
"Robson L. F.",
""
]
] |
Given an unlabeled dataset, wherein we have access only to pairwise similarities (or distances), how can we effectively (1) detect outliers, and (2) annotate/tag the outliers by type? Outlier detection has a large literature, yet we find a key gap in the field: to our knowledge, no existing work addresses the outlier annotation problem. Outliers are broadly classified into 3 types, representing distinct patterns that could be valuable to analysts: (a) global outliers are severe yet isolated cases that do not repeat, e.g., a data collection error; (b) local outliers diverge from their peers within a context, e.g., a particularly short basketball player; and (c) collective outliers are isolated micro-clusters that may indicate coalition or repetitions, e.g., frauds that exploit the same loophole. This paper presents C-AllOut: a novel and effective outlier detector that annotates outliers by type. It is parameter-free and scalable, besides working only with pairwise similarities (or distances) when it is needed. We show that C-AllOut achieves on par or significantly better performance than state-of-the-art detectors when spotting outliers regardless of their type. It is also highly effective in annotating outliers of particular types, a task that none of the baselines can perform.
|
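C-AllOut itself is more involved (parameter-free, and it also annotates outlier types), but the abstract's premise of scoring outliers from pairwise distances alone, with no access to raw features, can be illustrated with a minimal k-nearest-neighbor distance score. The `knn_outlier_scores` helper below is an assumption-laden sketch, not the paper's algorithm.

```python
def knn_outlier_scores(dist, k=2):
    """Score each point by the distance to its k-th nearest neighbor,
    computed from a symmetric pairwise distance matrix only.
    A larger score means the point is more isolated, i.e. more
    outlier-like in the global sense described in the abstract."""
    n = len(dist)
    scores = []
    for i in range(n):
        # Distances from point i to all other points, sorted ascending.
        others = sorted(dist[i][j] for j in range(n) if j != i)
        scores.append(others[k - 1])
    return scores
```

Distinguishing local and collective outliers takes more machinery (e.g. comparing a point's score against its neighbors', or checking whether small tight groups sit far from everything else); the sketch covers only the global case.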
1909.03315
|
Martin Andrews
|
Martin Andrews, Sam Witteveen
|
Relationships from Entity Stream
|
Accepted paper for the ViGIL workshop at NIPS 2017. (4 pages +
references)
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Relational reasoning is a central component of intelligent behavior, but has
proven difficult for neural networks to learn. The Relation Network (RN) module
was recently proposed by DeepMind to solve such problems, and demonstrated
state-of-the-art results on a number of datasets. However, the RN module scales
quadratically in the size of the input, since it calculates relationship
factors between every patch in the visual field, including those that do not
correspond to entities. In this paper, we describe an architecture that enables
relationships to be determined from a stream of entities obtained by an
attention mechanism over the input field. The model is trained end-to-end, and
demonstrates equivalent performance with greater interpretability while
requiring only a fraction of the model parameters of the original RN module.
|
[
{
"created": "Sat, 7 Sep 2019 18:24:57 GMT",
"version": "v1"
}
] |
2019-09-10
|
[
[
"Andrews",
"Martin",
""
],
[
"Witteveen",
"Sam",
""
]
] |
Relational reasoning is a central component of intelligent behavior, but has proven difficult for neural networks to learn. The Relation Network (RN) module was recently proposed by DeepMind to solve such problems, and demonstrated state-of-the-art results on a number of datasets. However, the RN module scales quadratically in the size of the input, since it calculates relationship factors between every patch in the visual field, including those that do not correspond to entities. In this paper, we describe an architecture that enables relationships to be determined from a stream of entities obtained by an attention mechanism over the input field. The model is trained end-to-end, and demonstrates equivalent performance with greater interpretability while requiring only a fraction of the model parameters of the original RN module.
|
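The quadratic cost the abstract attributes to the Relation Network module is easy to see in code: a shared function g is applied to every ordered pair of input objects and the results are summed before a final function f. The tiny linear-plus-ReLU g and linear f below are stand-ins for the sketch, not DeepMind's actual architecture.

```python
import numpy as np

def relation_network(entities, W_g, W_f):
    """Minimal Relation Network forward pass.

    Applies a shared g (here: linear + ReLU, weights W_g) to the
    concatenation of every ordered pair of entity vectors, sums the
    results, then applies f (here: linear, weights W_f). The double
    loop makes the quadratic scaling in the number of entities
    explicit -- the cost that selecting a short entity stream via
    attention avoids.
    """
    pair_sum = 0.0
    for oi in entities:
        for oj in entities:
            pair = np.concatenate([oi, oj])
            pair_sum = pair_sum + np.maximum(W_g @ pair, 0.0)  # g
    return W_f @ pair_sum                                      # f
```

Because g is summed over all ordered pairs, the output is invariant to permuting the entities, which is what makes the module a relational (set-based) operator rather than a sequence model.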
1709.05958
|
Matthew Peveler
|
Matthew Peveler, Naveen Sundar Govindarajulu, Selmer Bringsjord,
Atriya Sen, Biplav Srivastava, Kartik Talamadupula, Hui Su
|
Toward Cognitive and Immersive Systems: Experiments in a Cognitive
Microworld
|
Submitted to Advances of Cognitive Systems 2018
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As computational power has continued to increase, and sensors have become
more accurate, the corresponding advent of systems that are at once cognitive
and immersive has arrived. These \textit{cognitive and immersive systems}
(CAISs) fall squarely into the intersection of AI with HCI/HRI: such systems
interact with and assist the human agents that enter them, in no small part
because such systems are infused with AI able to understand and reason about
these humans and their knowledge, beliefs, goals, communications, plans, etc.
We herein explain our approach to engineering CAISs. We emphasize the capacity
of a CAIS to develop and reason over a `theory of the mind' of its human
partners. This capacity entails that the AI in question has a sophisticated
model of the beliefs, knowledge, goals, desires, emotions, etc.\ of these
humans. To accomplish this engineering, a formal framework of very high
expressivity is needed. In our case, this framework is a \textit{cognitive
event calculus}, a particular kind of quantified multi-operator modal logic,
and a matching high-expressivity automated reasoner and planner. To explain,
advance, and to a degree validate our approach, we show that a calculus of this
type satisfies a set of formal requirements, and can enable a CAIS to
understand a psychologically tricky scenario couched in what we call the
\textit{cognitive polysolid framework} (CPF). We also formally show that a room
that satisfies these requirements can have a useful property we term
\emph{expectation of usefulness}. CPF, a sub-class of \textit{cognitive
microworlds}, includes machinery able to represent and plan over not merely
blocks and actions (such as seen in the primitive `blocks worlds' of old), but
also over agents and their mental attitudes about both other agents and
inanimate objects.
|
[
{
"created": "Thu, 14 Sep 2017 21:52:54 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Dec 2018 17:26:47 GMT",
"version": "v2"
}
] |
2024-06-10
|
[
[
"Peveler",
"Matthew",
""
],
[
"Govindarajulu",
"Naveen Sundar",
""
],
[
"Bringsjord",
"Selmer",
""
],
[
"Sen",
"Atriya",
""
],
[
"Srivastava",
"Biplav",
""
],
[
"Talamadupula",
"Kartik",
""
],
[
"Su",
"Hui",
""
]
] |
As computational power has continued to increase, and sensors have become more accurate, the corresponding advent of systems that are at once cognitive and immersive has arrived. These \textit{cognitive and immersive systems} (CAISs) fall squarely into the intersection of AI with HCI/HRI: such systems interact with and assist the human agents that enter them, in no small part because such systems are infused with AI able to understand and reason about these humans and their knowledge, beliefs, goals, communications, plans, etc. We herein explain our approach to engineering CAISs. We emphasize the capacity of a CAIS to develop and reason over a `theory of the mind' of its human partners. This capacity entails that the AI in question has a sophisticated model of the beliefs, knowledge, goals, desires, emotions, etc.\ of these humans. To accomplish this engineering, a formal framework of very high expressivity is needed. In our case, this framework is a \textit{cognitive event calculus}, a particular kind of quantified multi-operator modal logic, and a matching high-expressivity automated reasoner and planner. To explain, advance, and to a degree validate our approach, we show that a calculus of this type satisfies a set of formal requirements, and can enable a CAIS to understand a psychologically tricky scenario couched in what we call the \textit{cognitive polysolid framework} (CPF). We also formally show that a room that satisfies these requirements can have a useful property we term \emph{expectation of usefulness}. CPF, a sub-class of \textit{cognitive microworlds}, includes machinery able to represent and plan over not merely blocks and actions (such as seen in the primitive `blocks worlds' of old), but also over agents and their mental attitudes about both other agents and inanimate objects.
|
0710.4832
|
EDA Publishing Association
|
Massimo Conti
|
SystemC Analysis of a New Dynamic Power Management Architecture
|
Submitted on behalf of EDAA (http://www.edaa.com/)
|
Dans Design, Automation and Test in Europe | Designers'Forum -
DATE'05, Munich : Allemagne (2005)
| null | null |
cs.AR
| null |
This paper presents a new dynamic power management architecture of a System
on Chip. The Power State Machine describing the status of the core follows the
recommendations of the ACPI standard. The algorithm controls the power states
of each block on the basis of battery status, chip temperature and a user
defined task priority.
|
[
{
"created": "Thu, 25 Oct 2007 12:10:19 GMT",
"version": "v1"
}
] |
2011-11-09
|
[
[
"Conti",
"Massimo",
""
]
] |
This paper presents a new dynamic power management architecture of a System on Chip. The Power State Machine describing the status of the core follows the recommendations of the ACPI standard. The algorithm controls the power states of each block on the basis of battery status, chip temperature and a user defined task priority.
|