id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2003.05266
|
Adrian Brandemuehl
|
Leiv Andresen, Adrian Brandemuehl, Alex H\"onger, Benson Kuan, Niclas
V\"odisch, Hermann Blum, Victor Reijgwart, Lukas Bernreiter, Lukas Schaupp,
Jen Jen Chung, Mathias B\"urki, Martin R. Oswald, Roland Siegwart and Abel
Gawel
|
Accurate Mapping and Planning for Autonomous Racing
| null |
2020 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2020, pp. 4743-4749
|
10.1109/IROS45743.2020.9341702
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the perception, mapping, and planning pipeline
implemented on an autonomous race car. It was developed by the 2019 AMZ
driverless team for the Formula Student Germany (FSG) 2019 driverless
competition, where it won 1st place overall. The presented solution combines
early fusion of camera and LiDAR data, a layered mapping approach, and a
planning approach that uses Bayesian filtering to achieve high-speed driving on
unknown race tracks while creating accurate maps. We benchmark the method
against our team's previous solution, which won FSG 2018, and show improved
accuracy when driving at the same speeds. Furthermore, the new pipeline makes
it possible to reliably raise the maximum driving speed in unknown environments
from 3~m/s to 12~m/s while still mapping with an acceptable RMSE of 0.29~m.
|
[
{
"created": "Wed, 11 Mar 2020 13:08:21 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Mar 2020 13:32:45 GMT",
"version": "v2"
},
{
"created": "Sat, 1 Aug 2020 15:34:29 GMT",
"version": "v3"
},
{
"created": "Thu, 17 Sep 2020 20:05:34 GMT",
"version": "v4"
}
] |
2021-03-02
|
[
[
"Andresen",
"Leiv",
""
],
[
"Brandemuehl",
"Adrian",
""
],
[
"Hönger",
"Alex",
""
],
[
"Kuan",
"Benson",
""
],
[
"Vödisch",
"Niclas",
""
],
[
"Blum",
"Hermann",
""
],
[
"Reijgwart",
"Victor",
""
],
[
"Bernreiter",
"Lukas",
""
],
[
"Schaupp",
"Lukas",
""
],
[
"Chung",
"Jen Jen",
""
],
[
"Bürki",
"Mathias",
""
],
[
"Oswald",
"Martin R.",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Gawel",
"Abel",
""
]
] |
This paper presents the perception, mapping, and planning pipeline implemented on an autonomous race car. It was developed by the 2019 AMZ driverless team for the Formula Student Germany (FSG) 2019 driverless competition, where it won 1st place overall. The presented solution combines early fusion of camera and LiDAR data, a layered mapping approach, and a planning approach that uses Bayesian filtering to achieve high-speed driving on unknown race tracks while creating accurate maps. We benchmark the method against our team's previous solution, which won FSG 2018, and show improved accuracy when driving at the same speeds. Furthermore, the new pipeline makes it possible to reliably raise the maximum driving speed in unknown environments from 3~m/s to 12~m/s while still mapping with an acceptable RMSE of 0.29~m.
|
2301.01913
|
Tom Marty
|
Tom Marty, Tristan Fran\c{c}ois, Pierre Tessier, Louis Gauthier,
Louis-Martin Rousseau, Quentin Cappart
|
Learning a Generic Value-Selection Heuristic Inside a Constraint
Programming Solver
|
15 pages
|
Constraint Programming 29 (2023) 25:1--25:19
|
10.4230/LIPIcs.CP.2023.25
| null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constraint programming is known for being an efficient approach for solving
combinatorial problems. Important design choices in a solver are the branching
heuristics, which are designed to lead the search to the best solutions in a
minimum amount of time. However, developing these heuristics is a
time-consuming process that requires problem-specific expertise. This
observation has motivated many efforts to use machine learning to automatically
learn efficient heuristics without expert intervention. To the best of our
knowledge, it is still an open research question. Although several generic
variable-selection heuristics are available in the literature, the options for
a generic value-selection heuristic are more scarce. In this paper, we propose
to tackle this issue by introducing a generic learning procedure that can be
used to obtain a value-selection heuristic inside a constraint programming
solver. This has been achieved thanks to the combination of a deep Q-learning
algorithm, a tailored reward signal, and a heterogeneous graph neural network
architecture. Experiments on graph coloring, maximum independent set, and
maximum cut problems show that our framework is able to find better solutions
close to optimality without requiring a large amounts of backtracks while being
generic.
|
[
{
"created": "Thu, 5 Jan 2023 05:13:48 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jul 2023 22:15:08 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Oct 2023 16:59:40 GMT",
"version": "v3"
}
] |
2024-04-17
|
[
[
"Marty",
"Tom",
""
],
[
"François",
"Tristan",
""
],
[
"Tessier",
"Pierre",
""
],
[
"Gauthier",
"Louis",
""
],
[
"Rousseau",
"Louis-Martin",
""
],
[
"Cappart",
"Quentin",
""
]
] |
Constraint programming is known for being an efficient approach for solving combinatorial problems. Important design choices in a solver are the branching heuristics, which are designed to lead the search to the best solutions in a minimum amount of time. However, developing these heuristics is a time-consuming process that requires problem-specific expertise. This observation has motivated many efforts to use machine learning to automatically learn efficient heuristics without expert intervention. To the best of our knowledge, it is still an open research question. Although several generic variable-selection heuristics are available in the literature, the options for a generic value-selection heuristic are more scarce. In this paper, we propose to tackle this issue by introducing a generic learning procedure that can be used to obtain a value-selection heuristic inside a constraint programming solver. This has been achieved thanks to the combination of a deep Q-learning algorithm, a tailored reward signal, and a heterogeneous graph neural network architecture. Experiments on graph coloring, maximum independent set, and maximum cut problems show that our framework is able to find better solutions close to optimality without requiring a large amounts of backtracks while being generic.
|
2212.04972
|
Jialiang Lin
|
Jialiang Lin, Jiaxin Song, Zhangping Zhou, Yidong Chen, Xiaodong Shi
|
MOPRD: A multidisciplinary open peer review dataset
|
Please cite the version of Neural Computing and Applications
|
Neural Computing and Applications, Vol. 35, Issue 34, pp.
24191-24206 (2023)
|
10.1007/s00521-023-08891-5
| null |
cs.DL cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open peer review is a growing trend in academic publications. Public access
to peer review data can benefit both the academic and publishing communities.
It also serves as a great support to studies on review comment generation and
further to the realization of automated scholarly paper review. However, most
of the existing peer review datasets do not provide data that cover the whole
peer review process. Apart from this, their data are not diversified enough as
the data are mainly collected from the field of computer science. These two
drawbacks of the currently available peer review datasets need to be addressed
to unlock more opportunities for related studies. In response, we construct
MOPRD, a multidisciplinary open peer review dataset. This dataset consists of
paper metadata, multiple version manuscripts, review comments, meta-reviews,
author's rebuttal letters, and editorial decisions. Moreover, we propose a
modular guided review comment generation method based on MOPRD. Experiments
show that our method delivers better performance as indicated by both automatic
metrics and human evaluation. We also explore other potential applications of
MOPRD, including meta-review generation, editorial decision prediction, author
rebuttal generation, and scientometric analysis. MOPRD is a strong endorsement
for further studies in peer review-related research and other applications.
|
[
{
"created": "Fri, 9 Dec 2022 16:35:14 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Nov 2023 18:06:48 GMT",
"version": "v2"
}
] |
2023-11-15
|
[
[
"Lin",
"Jialiang",
""
],
[
"Song",
"Jiaxin",
""
],
[
"Zhou",
"Zhangping",
""
],
[
"Chen",
"Yidong",
""
],
[
"Shi",
"Xiaodong",
""
]
] |
Open peer review is a growing trend in academic publications. Public access to peer review data can benefit both the academic and publishing communities. It also serves as a great support to studies on review comment generation and further to the realization of automated scholarly paper review. However, most of the existing peer review datasets do not provide data that cover the whole peer review process. Apart from this, their data are not diversified enough as the data are mainly collected from the field of computer science. These two drawbacks of the currently available peer review datasets need to be addressed to unlock more opportunities for related studies. In response, we construct MOPRD, a multidisciplinary open peer review dataset. This dataset consists of paper metadata, multiple version manuscripts, review comments, meta-reviews, author's rebuttal letters, and editorial decisions. Moreover, we propose a modular guided review comment generation method based on MOPRD. Experiments show that our method delivers better performance as indicated by both automatic metrics and human evaluation. We also explore other potential applications of MOPRD, including meta-review generation, editorial decision prediction, author rebuttal generation, and scientometric analysis. MOPRD is a strong endorsement for further studies in peer review-related research and other applications.
|
2306.13381
|
Rahul Nair
|
Rahul Nair
|
Co-creating a globally interpretable model with human input
|
Paper at Artificial Intelligence & Human-Computer Interaction
Workshop at ICML 2023
| null | null | null |
cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We consider an aggregated human-AI collaboration aimed at generating a joint
interpretable model. The model takes the form of Boolean decision rules, where
human input is provided in the form of logical conditions or as partial
templates. This focus on the combined construction of a model offers a
different perspective on joint decision making. Previous efforts have typically
focused on aggregating outcomes rather than decisions logic. We demonstrate the
proposed approach through two examples and highlight the usefulness and
challenges of the approach.
|
[
{
"created": "Fri, 23 Jun 2023 09:03:16 GMT",
"version": "v1"
}
] |
2023-06-26
|
[
[
"Nair",
"Rahul",
""
]
] |
We consider an aggregated human-AI collaboration aimed at generating a joint interpretable model. The model takes the form of Boolean decision rules, where human input is provided in the form of logical conditions or as partial templates. This focus on the combined construction of a model offers a different perspective on joint decision making. Previous efforts have typically focused on aggregating outcomes rather than decisions logic. We demonstrate the proposed approach through two examples and highlight the usefulness and challenges of the approach.
|
1204.4467
|
I-Hong Hou
|
I-Hong Hou and Rahul Singh
|
Real-Time Stochastic Processing Networks with Concurrent Resource
Requirements
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic Processing Networks (SPNs) can be used to model communication
networks, manufacturing systems, service systems, etc. We consider a real-time
SPN where tasks generate jobs with strict deadlines according to their traffic
patterns. Each job requires the concurrent usage of some resources to be
processed. The processing time of a job may be stochastic, and may not be known
until the job completes. Finally, each task may require that some portion of
its tasks to be completed on time.
In this paper, we study the problem of verifying whether it is feasible to
fulfill the requirements of tasks, and of designing scheduling policies that
actually fulfill the requirements. We first address these problems for systems
where there is only one resource. Such systems are analog to ones studied in a
previous work, and, similar to the previous work, we can develop sharp
conditions for feasibility and scheduling policy that is feasibility-optimal.
We then study systems with two resources where there are jobs that require both
resources to be processed. We show that there is a reduction method that turns
systems with two resources into equivalent single-resource systems. Based on
this method, we can also derive sharp feasibility conditions and
feasibility-optimal scheduling policies for systems with two resources.
|
[
{
"created": "Thu, 19 Apr 2012 20:34:57 GMT",
"version": "v1"
}
] |
2012-04-23
|
[
[
"Hou",
"I-Hong",
""
],
[
"Singh",
"Rahul",
""
]
] |
Stochastic Processing Networks (SPNs) can be used to model communication networks, manufacturing systems, service systems, etc. We consider a real-time SPN where tasks generate jobs with strict deadlines according to their traffic patterns. Each job requires the concurrent usage of some resources to be processed. The processing time of a job may be stochastic, and may not be known until the job completes. Finally, each task may require that some portion of its tasks to be completed on time. In this paper, we study the problem of verifying whether it is feasible to fulfill the requirements of tasks, and of designing scheduling policies that actually fulfill the requirements. We first address these problems for systems where there is only one resource. Such systems are analog to ones studied in a previous work, and, similar to the previous work, we can develop sharp conditions for feasibility and scheduling policy that is feasibility-optimal. We then study systems with two resources where there are jobs that require both resources to be processed. We show that there is a reduction method that turns systems with two resources into equivalent single-resource systems. Based on this method, we can also derive sharp feasibility conditions and feasibility-optimal scheduling policies for systems with two resources.
|
1908.04133
|
Matthias Niedermaier
|
Matthias Niedermaier and Dominik Merli and Georg Sigl
|
A Secure Dual-MCU Architecture for Robust Communication of IIoT Devices
| null |
2019 8th Mediterranean Conference on Embedded Computing (MECO)
|
10.1109/MECO.2019.8760188
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Industrial Internet of Things (IIoT) has already become a part of our
everyday life be it water supply, smart grid, or production, IIoT is
everywhere. For example, factory operators want to know the current state of
the production line. These new demands for data acquisition in modern plants
require industrial components to be able to communicate. Nowadays, network
communication in Industrial Control Systems (ICSs) is often implemented via an
IP-based protocol. This intercommunication also brings a larger attack surface
for hackers. If an IIoT device is influenced by attackers, the physical process
could be affected. For example, a high network load could cause a high Central
Processing Unit (CPU) load and influence the reaction time on the physical
control side. In this paper, we introduce a dual Microcontroller Unit (MCU)
setup to ensure a resilient controlling for IIoT devices like Programmable
Logic Controllers (PLCs). We introduce a possible solution for the demand of
secure architectures in the IIoT. Moreover, we provide a Proof of Concept (PoC)
implementation with a benchmark and a comparison with a standard PLC.
|
[
{
"created": "Mon, 12 Aug 2019 12:58:53 GMT",
"version": "v1"
}
] |
2019-08-13
|
[
[
"Niedermaier",
"Matthias",
""
],
[
"Merli",
"Dominik",
""
],
[
"Sigl",
"Georg",
""
]
] |
The Industrial Internet of Things (IIoT) has already become a part of our everyday life be it water supply, smart grid, or production, IIoT is everywhere. For example, factory operators want to know the current state of the production line. These new demands for data acquisition in modern plants require industrial components to be able to communicate. Nowadays, network communication in Industrial Control Systems (ICSs) is often implemented via an IP-based protocol. This intercommunication also brings a larger attack surface for hackers. If an IIoT device is influenced by attackers, the physical process could be affected. For example, a high network load could cause a high Central Processing Unit (CPU) load and influence the reaction time on the physical control side. In this paper, we introduce a dual Microcontroller Unit (MCU) setup to ensure a resilient controlling for IIoT devices like Programmable Logic Controllers (PLCs). We introduce a possible solution for the demand of secure architectures in the IIoT. Moreover, we provide a Proof of Concept (PoC) implementation with a benchmark and a comparison with a standard PLC.
|
2002.12324
|
Eric Brachmann
|
Eric Brachmann and Carsten Rother
|
Visual Camera Re-Localization from RGB and RGB-D Images Using DSAC
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a learning-based system that estimates the camera position and
orientation from a single input image relative to a known environment. The
system is flexible w.r.t. the amount of information available at test and at
training time, catering to different applications. Input images can be RGB-D or
RGB, and a 3D model of the environment can be utilized for training but is not
necessary. In the minimal case, our system requires only RGB images and ground
truth poses at training time, and it requires only a single RGB image at test
time. The framework consists of a deep neural network and fully differentiable
pose optimization. The neural network predicts so called scene coordinates,
i.e. dense correspondences between the input image and 3D scene space of the
environment. The pose optimization implements robust fitting of pose parameters
using differentiable RANSAC (DSAC) to facilitate end-to-end training. The
system, an extension of DSAC++ and referred to as DSAC*, achieves
state-of-the-art accuracy an various public datasets for RGB-based
re-localization, and competitive accuracy for RGB-D-based re-localization.
|
[
{
"created": "Thu, 27 Feb 2020 18:45:21 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Aug 2020 12:07:47 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Aug 2020 12:29:26 GMT",
"version": "v3"
},
{
"created": "Fri, 9 Oct 2020 15:03:02 GMT",
"version": "v4"
}
] |
2020-10-12
|
[
[
"Brachmann",
"Eric",
""
],
[
"Rother",
"Carsten",
""
]
] |
We describe a learning-based system that estimates the camera position and orientation from a single input image relative to a known environment. The system is flexible w.r.t. the amount of information available at test and at training time, catering to different applications. Input images can be RGB-D or RGB, and a 3D model of the environment can be utilized for training but is not necessary. In the minimal case, our system requires only RGB images and ground truth poses at training time, and it requires only a single RGB image at test time. The framework consists of a deep neural network and fully differentiable pose optimization. The neural network predicts so called scene coordinates, i.e. dense correspondences between the input image and 3D scene space of the environment. The pose optimization implements robust fitting of pose parameters using differentiable RANSAC (DSAC) to facilitate end-to-end training. The system, an extension of DSAC++ and referred to as DSAC*, achieves state-of-the-art accuracy an various public datasets for RGB-based re-localization, and competitive accuracy for RGB-D-based re-localization.
|
2310.17417
|
John Liu
|
Hing Lie, Kachina Studer, Zhen Zhao, Ben Thomson, Dishita G Turakhia,
John Liu
|
Training for Open-Ended Drilling through a Virtual Reality Simulation
|
10 pages, 4 figures, 9 tables
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Virtual Reality (VR) can support effective and scalable training of
psychomotor skills in manufacturing. However, many industry training modules
offer experiences that are close-ended and do not allow for human error. We aim
to address this gap in VR training tools for psychomotor skills training by
exploring an open-ended approach to the system design. We designed a VR
training simulation prototype to perform open-ended practice of drilling using
a 3-axis milling machine. The simulation employs near "end-to-end" instruction
through a safety module, a setup and drilling tutorial, open-ended practice
complete with warnings of mistakes and failures, and a function to assess the
geometries and locations of drilled holes against an engineering drawing. We
developed and conducted a user study within an undergraduate-level introductory
fabrication course to investigate the impact of open-ended VR practice on
learning outcomes. Study results reveal positive trends, with the VR group
successfully completing the machining task of drilling at a higher rate (75% vs
64%), with fewer mistakes (1.75 vs 2.14 score), and in less time (17.67 mins vs
21.57 mins) compared to the control group. We discuss our findings and
limitations and implications for the design of open-ended VR training systems
for learning psychomotor skills.
|
[
{
"created": "Thu, 26 Oct 2023 14:22:30 GMT",
"version": "v1"
}
] |
2023-10-27
|
[
[
"Lie",
"Hing",
""
],
[
"Studer",
"Kachina",
""
],
[
"Zhao",
"Zhen",
""
],
[
"Thomson",
"Ben",
""
],
[
"Turakhia",
"Dishita G",
""
],
[
"Liu",
"John",
""
]
] |
Virtual Reality (VR) can support effective and scalable training of psychomotor skills in manufacturing. However, many industry training modules offer experiences that are close-ended and do not allow for human error. We aim to address this gap in VR training tools for psychomotor skills training by exploring an open-ended approach to the system design. We designed a VR training simulation prototype to perform open-ended practice of drilling using a 3-axis milling machine. The simulation employs near "end-to-end" instruction through a safety module, a setup and drilling tutorial, open-ended practice complete with warnings of mistakes and failures, and a function to assess the geometries and locations of drilled holes against an engineering drawing. We developed and conducted a user study within an undergraduate-level introductory fabrication course to investigate the impact of open-ended VR practice on learning outcomes. Study results reveal positive trends, with the VR group successfully completing the machining task of drilling at a higher rate (75% vs 64%), with fewer mistakes (1.75 vs 2.14 score), and in less time (17.67 mins vs 21.57 mins) compared to the control group. We discuss our findings and limitations and implications for the design of open-ended VR training systems for learning psychomotor skills.
|
2406.00588
|
Xiao-Shan Gao
|
Lijia Yu and Shuang Liu and Yibo Miao and Xiao-Shan Gao and Lijun
Zhang
|
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack
| null | null | null | null |
cs.LG cs.CR math.ST stat.TH
|
http://creativecommons.org/licenses/by/4.0/
|
The generalization bound is a crucial theoretical tool for assessing the
generalizability of learning methods and there exist vast literatures on
generalizability of normal learning, adversarial learning, and data poisoning.
Unlike other data poison attacks, the backdoor attack has the special property
that the poisoned triggers are contained in both the training set and the test
set and the purpose of the attack is two-fold. To our knowledge, the
generalization bound for the backdoor attack has not been established. In this
paper, we fill this gap by deriving algorithm-independent generalization bounds
in the clean-label backdoor attack scenario. Precisely, based on the goals of
backdoor attack, we give upper bounds for the clean sample population errors
and the poison population errors in terms of the empirical error on the
poisoned training dataset. Furthermore, based on the theoretical result, a new
clean-label backdoor attack is proposed that computes the poisoning trigger by
combining adversarial noise and indiscriminate poison. We show its
effectiveness in a variety of settings.
|
[
{
"created": "Sun, 2 Jun 2024 01:46:58 GMT",
"version": "v1"
}
] |
2024-06-05
|
[
[
"Yu",
"Lijia",
""
],
[
"Liu",
"Shuang",
""
],
[
"Miao",
"Yibo",
""
],
[
"Gao",
"Xiao-Shan",
""
],
[
"Zhang",
"Lijun",
""
]
] |
The generalization bound is a crucial theoretical tool for assessing the generalizability of learning methods and there exist vast literatures on generalizability of normal learning, adversarial learning, and data poisoning. Unlike other data poison attacks, the backdoor attack has the special property that the poisoned triggers are contained in both the training set and the test set and the purpose of the attack is two-fold. To our knowledge, the generalization bound for the backdoor attack has not been established. In this paper, we fill this gap by deriving algorithm-independent generalization bounds in the clean-label backdoor attack scenario. Precisely, based on the goals of backdoor attack, we give upper bounds for the clean sample population errors and the poison population errors in terms of the empirical error on the poisoned training dataset. Furthermore, based on the theoretical result, a new clean-label backdoor attack is proposed that computes the poisoning trigger by combining adversarial noise and indiscriminate poison. We show its effectiveness in a variety of settings.
|
1511.02919
|
Deepti Ghadiyaram
|
Deepti Ghadiyaram and Alan C. Bovik
|
Massive Online Crowdsourced Study of Subjective and Objective Picture
Quality
|
16 pages
| null |
10.1109/TIP.2015.2500021
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most publicly available image quality databases have been created under
highly controlled conditions by introducing graded simulated distortions onto
high-quality photographs. However, images captured using typical real-world
mobile camera devices are usually afflicted by complex mixtures of multiple
distortions, which are not necessarily well-modeled by the synthetic
distortions found in existing databases. The originators of existing legacy
databases usually conducted human psychometric studies to obtain statistically
meaningful sets of human opinion scores on images in a stringently controlled
visual environment, resulting in small data collections relative to other kinds
of image analysis databases. Towards overcoming these limitations, we designed
and created a new database that we call the LIVE In the Wild Image Quality
Challenge Database, which contains widely diverse authentic image distortions
on a large number of images captured using a representative variety of modern
mobile devices. We also designed and implemented a new online crowdsourcing
system, which we have used to conduct a very large-scale, multi-month image
quality assessment subjective study. Our database consists of over 350000
opinion scores on 1162 images evaluated by over 7000 unique human observers.
Despite the lack of control over the experimental environments of the numerous
study participants, we demonstrate excellent internal consistency of the
subjective dataset. We also evaluate several top-performing blind Image Quality
Assessment algorithms on it and present insights on how mixtures of distortions
challenge both end users as well as automatic perceptual quality prediction
models.
|
[
{
"created": "Mon, 9 Nov 2015 22:39:58 GMT",
"version": "v1"
}
] |
2016-01-20
|
[
[
"Ghadiyaram",
"Deepti",
""
],
[
"Bovik",
"Alan C.",
""
]
] |
Most publicly available image quality databases have been created under highly controlled conditions by introducing graded simulated distortions onto high-quality photographs. However, images captured using typical real-world mobile camera devices are usually afflicted by complex mixtures of multiple distortions, which are not necessarily well-modeled by the synthetic distortions found in existing databases. The originators of existing legacy databases usually conducted human psychometric studies to obtain statistically meaningful sets of human opinion scores on images in a stringently controlled visual environment, resulting in small data collections relative to other kinds of image analysis databases. Towards overcoming these limitations, we designed and created a new database that we call the LIVE In the Wild Image Quality Challenge Database, which contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices. We also designed and implemented a new online crowdsourcing system, which we have used to conduct a very large-scale, multi-month image quality assessment subjective study. Our database consists of over 350000 opinion scores on 1162 images evaluated by over 7000 unique human observers. Despite the lack of control over the experimental environments of the numerous study participants, we demonstrate excellent internal consistency of the subjective dataset. We also evaluate several top-performing blind Image Quality Assessment algorithms on it and present insights on how mixtures of distortions challenge both end users as well as automatic perceptual quality prediction models.
|
2206.01970
|
Haosen Wang
|
Enqiang Zhu, Haosen Wang, Yu Zhang, Kai Zhang, Chanjuan Liu
|
PHEE: A phased hybrid evaluation-enhanced approach for identifying
influential users in social networks
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the purpose of maximizing the spread of influence caused by a certain
small number k of nodes in a social network, we are asked to find a k-subset of
nodes (i.e., a seed set) with the best capacity to influence the nodes not in
it. This problem of influence maximization (IM) has wide application, belongs
to subset problems, and is NP-hard. To solve it, we should theoretically
examine all seed sets and evaluate their influence spreads, which is
time-consuming. Therefore, metaheuristic strategies are generally employed to
gain a good seed set within a reasonable time. We observe that many algorithms
for the IM problem only adopt a uniform mechanism in the whole solution search
process, which lacks a response measure when the algorithm becomes trapped in a
local optimum. To address this issue, we propose a phased hybrid
evaluation-enhanced (PHEE) approach for IM, which utilizes two distinct search
strategies to enhance the search of optimal solutions: a randomized range
division evolutionary (RandRDE) algorithm to improve the solution quality, and
a fast convergence strategy. Our approach is evaluated on 10 real-world social
networks of different sizes and types. Experimental results demonstrate that
our algorithm is efficient and obtains the best influence spread for all the
datasets compared with three state-of-the-art algorithms, outperforms the time
consuming CELF algorithm on four datasets, and performs worse than CELF on only
two networks.
|
[
{
"created": "Sat, 4 Jun 2022 12:00:08 GMT",
"version": "v1"
}
] |
2022-06-07
|
[
[
"Zhu",
"Enqiang",
""
],
[
"Wang",
"Haosen",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Kai",
""
],
[
"Liu",
"Chanjuan",
""
]
] |
For the purpose of maximizing the spread of influence caused by a certain small number k of nodes in a social network, we are asked to find a k-subset of nodes (i.e., a seed set) with the best capacity to influence the nodes not in it. This problem of influence maximization (IM) has wide application, belongs to subset problems, and is NP-hard. To solve it, we should theoretically examine all seed sets and evaluate their influence spreads, which is time-consuming. Therefore, metaheuristic strategies are generally employed to gain a good seed set within a reasonable time. We observe that many algorithms for the IM problem only adopt a uniform mechanism in the whole solution search process, which lacks a response measure when the algorithm becomes trapped in a local optimum. To address this issue, we propose a phased hybrid evaluation-enhanced (PHEE) approach for IM, which utilizes two distinct search strategies to enhance the search of optimal solutions: a randomized range division evolutionary (RandRDE) algorithm to improve the solution quality, and a fast convergence strategy. Our approach is evaluated on 10 real-world social networks of different sizes and types. Experimental results demonstrate that our algorithm is efficient and obtains the best influence spread for all the datasets compared with three state-of-the-art algorithms, outperforms the time consuming CELF algorithm on four datasets, and performs worse than CELF on only two networks.
|
2203.10581
|
Ariel Gera
|
Eyal Shnarch, Ariel Gera, Alon Halfon, Lena Dankin, Leshem Choshen,
Ranit Aharonov, Noam Slonim
|
Cluster & Tune: Boost Cold Start Performance in Text Classification
|
9 pages, 6 figures; To be published in ACL 2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In real-world scenarios, a text classification task often begins with a cold
start, when labeled data is scarce. In such cases, the common practice of
fine-tuning pre-trained models, such as BERT, for a target classification task,
is prone to produce poor performance. We suggest a method to boost the
performance of such models by adding an intermediate unsupervised
classification task, between the pre-training and fine-tuning phases. As such
an intermediate task, we perform clustering and train the pre-trained model on
predicting the cluster labels. We test this hypothesis on various data sets,
and show that this additional classification phase can significantly improve
performance, mainly for topical classification tasks, when the number of
labeled instances available for fine-tuning is only a couple of dozen to a few
hundred.
|
[
{
"created": "Sun, 20 Mar 2022 15:29:34 GMT",
"version": "v1"
}
] |
2022-03-22
|
[
[
"Shnarch",
"Eyal",
""
],
[
"Gera",
"Ariel",
""
],
[
"Halfon",
"Alon",
""
],
[
"Dankin",
"Lena",
""
],
[
"Choshen",
"Leshem",
""
],
[
"Aharonov",
"Ranit",
""
],
[
"Slonim",
"Noam",
""
]
] |
In real-world scenarios, a text classification task often begins with a cold start, when labeled data is scarce. In such cases, the common practice of fine-tuning pre-trained models, such as BERT, for a target classification task, is prone to produce poor performance. We suggest a method to boost the performance of such models by adding an intermediate unsupervised classification task, between the pre-training and fine-tuning phases. As such an intermediate task, we perform clustering and train the pre-trained model on predicting the cluster labels. We test this hypothesis on various data sets, and show that this additional classification phase can significantly improve performance, mainly for topical classification tasks, when the number of labeled instances available for fine-tuning is only a couple of dozen to a few hundred.
|
2012.01644
|
Joy Hsu
|
Joy Hsu, Jeffrey Gu, Gong-Her Wu, Wah Chiu, Serena Yeung
|
Capturing implicit hierarchical structure in 3D biomedical images with
self-supervised hyperbolic representations
|
To appear at NeurIPS 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the task of representation learning for unsupervised segmentation
of 3D voxel-grid biomedical images. We show that models that capture implicit
hierarchical relationships between subvolumes are better suited for this task.
To that end, we consider encoder-decoder architectures with a hyperbolic latent
space, to explicitly capture hierarchical relationships present in subvolumes
of the data. We propose utilizing a 3D hyperbolic variational autoencoder with
a novel gyroplane convolutional layer to map from the embedding space back to
3D images. To capture these relationships, we introduce an essential
self-supervised loss -- in addition to the standard VAE loss -- which infers
approximate hierarchies and encourages implicitly related subvolumes to be
mapped closer in the embedding space. We present experiments on both synthetic
data and biomedical data to validate our hypothesis.
|
[
{
"created": "Thu, 3 Dec 2020 02:15:31 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Dec 2020 23:28:46 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Oct 2021 22:36:34 GMT",
"version": "v3"
}
] |
2021-10-27
|
[
[
"Hsu",
"Joy",
""
],
[
"Gu",
"Jeffrey",
""
],
[
"Wu",
"Gong-Her",
""
],
[
"Chiu",
"Wah",
""
],
[
"Yeung",
"Serena",
""
]
] |
We consider the task of representation learning for unsupervised segmentation of 3D voxel-grid biomedical images. We show that models that capture implicit hierarchical relationships between subvolumes are better suited for this task. To that end, we consider encoder-decoder architectures with a hyperbolic latent space, to explicitly capture hierarchical relationships present in subvolumes of the data. We propose utilizing a 3D hyperbolic variational autoencoder with a novel gyroplane convolutional layer to map from the embedding space back to 3D images. To capture these relationships, we introduce an essential self-supervised loss -- in addition to the standard VAE loss -- which infers approximate hierarchies and encourages implicitly related subvolumes to be mapped closer in the embedding space. We present experiments on both synthetic data and biomedical data to validate our hypothesis.
|
1807.02816
|
The-Hien Dang-Ha
|
The-Hien Dang-Ha
|
Improving Deep Learning through Automatic Programming
|
Master's thesis (2014)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning and deep architectures are emerging as the best machine
learning methods so far in many practical applications such as reducing the
dimensionality of data, image classification, speech recognition or object
segmentation. In fact, many leading technology companies such as Google,
Microsoft or IBM are researching and using deep architectures in their systems
to replace other traditional models. Therefore, improving the performance of
these models could make a strong impact in the area of machine learning.
However, deep learning is a very fast-growing research domain with many core
methodologies and paradigms just discovered over the last few years. This
thesis will first serve as a short summary of deep learning, which tries to
include all of the most important ideas in this research area. Based on this
knowledge, we suggested, and conducted some experiments to investigate the
possibility of improving the deep learning based on automatic programming
(ADATE). Although our experiments did produce good results, there are still
many more possibilities that we could not try due to limited time as well as
some limitations of the current ADATE version. I hope that this thesis can
promote future work on this topic, especially when the next version of ADATE
comes out. This thesis also includes a short analysis of the power of ADATE
system, which could be useful for other researchers who want to know what it is
capable of.
|
[
{
"created": "Sun, 8 Jul 2018 13:38:21 GMT",
"version": "v1"
}
] |
2018-07-10
|
[
[
"Dang-Ha",
"The-Hien",
""
]
] |
Deep learning and deep architectures are emerging as the best machine learning methods so far in many practical applications such as reducing the dimensionality of data, image classification, speech recognition or object segmentation. In fact, many leading technology companies such as Google, Microsoft or IBM are researching and using deep architectures in their systems to replace other traditional models. Therefore, improving the performance of these models could make a strong impact in the area of machine learning. However, deep learning is a very fast-growing research domain with many core methodologies and paradigms just discovered over the last few years. This thesis will first serve as a short summary of deep learning, which tries to include all of the most important ideas in this research area. Based on this knowledge, we suggested, and conducted some experiments to investigate the possibility of improving the deep learning based on automatic programming (ADATE). Although our experiments did produce good results, there are still many more possibilities that we could not try due to limited time as well as some limitations of the current ADATE version. I hope that this thesis can promote future work on this topic, especially when the next version of ADATE comes out. This thesis also includes a short analysis of the power of ADATE system, which could be useful for other researchers who want to know what it is capable of.
|
0911.3306
|
Pascal Urso
|
Nacer Boudjlida (INRIA Lorraine - LORIA), Jean-Pierre Jacquot (LORIA),
Pascal Urso (INRIA Lorraine - LORIA)
|
Software Engineering Education by Example
| null |
5th China - Europe International Symposium on Software Industry
Oriented Education (CEISIE 2009), Bordeaux : France (2009)
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Based on the old but famous distinction between "in the small" and "in the
large" software development, at Nancy Universit\'e, UHP Nancy 1, we experience
for a while software engineering education thanks to actual project
engineering. This education method has the merit to enable students to discover
and to overcome actual problems when faced to a large project which may be
conducted by a large development team. The mode of education is a simulation of
an actual software engineering project as encountered in "real life\'e"
activities.
|
[
{
"created": "Tue, 17 Nov 2009 13:36:59 GMT",
"version": "v1"
}
] |
2009-11-18
|
[
[
"Boudjlida",
"Nacer",
"",
"INRIA Lorraine - LORIA"
],
[
"Jacquot",
"Jean-Pierre",
"",
"LORIA"
],
[
"Urso",
"Pascal",
"",
"INRIA Lorraine - LORIA"
]
] |
Based on the old but famous distinction between "in the small" and "in the large" software development, at Nancy Universit\'e, UHP Nancy 1, we experience for a while software engineering education thanks to actual project engineering. This education method has the merit to enable students to discover and to overcome actual problems when faced to a large project which may be conducted by a large development team. The mode of education is a simulation of an actual software engineering project as encountered in "real life\'e" activities.
|
2012.01172
|
Burak Yildiz
|
Burak Yildiz, Hayley Hung, Jesse H. Krijthe, Cynthia C. S. Liem, Marco
Loog, Gosia Migut, Frans Oliehoek, Annibale Panichella, Przemyslaw Pawelczak,
Stjepan Picek, Mathijs de Weerdt, and Jan van Gemert
|
ReproducedPapers.org: Openly teaching and structuring machine learning
reproducibility
|
Accepted to RRPR 2020: Third Workshop on Reproducible Research in
Pattern Recognition
| null |
10.1007/978-3-030-76423-4_1
| null |
cs.CY cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present ReproducedPapers.org: an open online repository for teaching and
structuring machine learning reproducibility. We evaluate doing a reproduction
project among students and the added value of an online reproduction repository
among AI researchers. We use anonymous self-assessment surveys and obtained 144
responses. Results suggest that students who do a reproduction project place
more value on scientific reproductions and become more critical thinkers.
Students and AI researchers agree that our online reproduction repository is
valuable.
|
[
{
"created": "Tue, 1 Dec 2020 11:19:45 GMT",
"version": "v1"
}
] |
2021-06-11
|
[
[
"Yildiz",
"Burak",
""
],
[
"Hung",
"Hayley",
""
],
[
"Krijthe",
"Jesse H.",
""
],
[
"Liem",
"Cynthia C. S.",
""
],
[
"Loog",
"Marco",
""
],
[
"Migut",
"Gosia",
""
],
[
"Oliehoek",
"Frans",
""
],
[
"Panichella",
"Annibale",
""
],
[
"Pawelczak",
"Przemyslaw",
""
],
[
"Picek",
"Stjepan",
""
],
[
"de Weerdt",
"Mathijs",
""
],
[
"van Gemert",
"Jan",
""
]
] |
We present ReproducedPapers.org: an open online repository for teaching and structuring machine learning reproducibility. We evaluate doing a reproduction project among students and the added value of an online reproduction repository among AI researchers. We use anonymous self-assessment surveys and obtained 144 responses. Results suggest that students who do a reproduction project place more value on scientific reproductions and become more critical thinkers. Students and AI researchers agree that our online reproduction repository is valuable.
|
1203.4627
|
Vasilis Gkatzelis
|
Richard Cole, Vasilis Gkatzelis, Gagan Goel
|
Truthfulness, Proportional Fairness, and Efficiency
| null | null | null | null |
cs.GT cs.DS cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How does one allocate a collection of resources to a set of strategic agents
in a fair and efficient manner without using money? For in many scenarios it is
not feasible to use money to compensate agents for otherwise unsatisfactory
outcomes. This paper studies this question, looking at both fairness and
efficiency measures.
We employ the proportionally fair solution, which is a well-known fairness
concept for money-free settings. But although finding a proportionally fair
solution is computationally tractable, it cannot be implemented in a truthful
fashion. Consequently, we seek approximate solutions. We give several truthful
mechanisms which achieve proportional fairness in an approximate sense. We use
a strong notion of approximation, requiring the mechanism to give each agent a
good approximation of its proportionally fair utility. In particular, one of
our mechanisms provides a better and better approximation factor as the minimum
demand for every good increases. A motivating example is provided by the
massive privatization auction in the Czech republic in the early 90s.
With regard to efficiency, prior work has shown a lower bound of 0.5 on the
approximation factor of any swap-dictatorial mechanism approximating a social
welfare measure even for the two agents and multiple goods case. We surpass
this lower bound by designing a non-swap-dictatorial mechanism for this case.
Interestingly, the new mechanism builds on the notion of proportional fairness.
|
[
{
"created": "Tue, 20 Mar 2012 23:55:49 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Jul 2012 22:16:17 GMT",
"version": "v2"
}
] |
2012-07-10
|
[
[
"Cole",
"Richard",
""
],
[
"Gkatzelis",
"Vasilis",
""
],
[
"Goel",
"Gagan",
""
]
] |
How does one allocate a collection of resources to a set of strategic agents in a fair and efficient manner without using money? For in many scenarios it is not feasible to use money to compensate agents for otherwise unsatisfactory outcomes. This paper studies this question, looking at both fairness and efficiency measures. We employ the proportionally fair solution, which is a well-known fairness concept for money-free settings. But although finding a proportionally fair solution is computationally tractable, it cannot be implemented in a truthful fashion. Consequently, we seek approximate solutions. We give several truthful mechanisms which achieve proportional fairness in an approximate sense. We use a strong notion of approximation, requiring the mechanism to give each agent a good approximation of its proportionally fair utility. In particular, one of our mechanisms provides a better and better approximation factor as the minimum demand for every good increases. A motivating example is provided by the massive privatization auction in the Czech republic in the early 90s. With regard to efficiency, prior work has shown a lower bound of 0.5 on the approximation factor of any swap-dictatorial mechanism approximating a social welfare measure even for the two agents and multiple goods case. We surpass this lower bound by designing a non-swap-dictatorial mechanism for this case. Interestingly, the new mechanism builds on the notion of proportional fairness.
|
2404.00987
|
Ruowen Zhao
|
Ruowen Zhao, Zhengyi Wang, Yikai Wang, Zihan Zhou and Jun Zhu
|
FlexiDreamer: Single Image-to-3D Generation with FlexiCubes
|
Project page: https://flexidreamer.github.io
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D content generation has wide applications in various fields. One of its
dominant paradigms is by sparse-view reconstruction using multi-view images
generated by diffusion models. However, since directly reconstructing triangle
meshes from multi-view images is challenging, most methodologies opt to an
implicit representation (such as NeRF) during the sparse-view reconstruction
and acquire the target mesh by a post-processing extraction. However, the
implicit representation takes extensive time to train and the post-extraction
also leads to undesirable visual artifacts. In this paper, we propose
FlexiDreamer, a novel framework that directly reconstructs high-quality meshes
from multi-view generated images. We utilize an advanced gradient-based mesh
optimization, namely FlexiCubes, for multi-view mesh reconstruction, which
enables us to generate 3D meshes in an end-to-end manner. To address the
reconstruction artifacts owing to the inconsistencies from generated images, we
design a hybrid positional encoding scheme to improve the reconstruction
geometry and an orientation-aware texture mapping to mitigate surface ghosting.
To further enhance the results, we respectively incorporate eikonal and smooth
regularizations to reduce geometric holes and surface noise. Our approach can
generate high-fidelity 3D meshes in the single image-to-3D downstream task with
approximately 1 minute, significantly outperforming previous methods.
|
[
{
"created": "Mon, 1 Apr 2024 08:20:18 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 09:51:37 GMT",
"version": "v2"
}
] |
2024-05-28
|
[
[
"Zhao",
"Ruowen",
""
],
[
"Wang",
"Zhengyi",
""
],
[
"Wang",
"Yikai",
""
],
[
"Zhou",
"Zihan",
""
],
[
"Zhu",
"Jun",
""
]
] |
3D content generation has wide applications in various fields. One of its dominant paradigms is by sparse-view reconstruction using multi-view images generated by diffusion models. However, since directly reconstructing triangle meshes from multi-view images is challenging, most methodologies opt to an implicit representation (such as NeRF) during the sparse-view reconstruction and acquire the target mesh by a post-processing extraction. However, the implicit representation takes extensive time to train and the post-extraction also leads to undesirable visual artifacts. In this paper, we propose FlexiDreamer, a novel framework that directly reconstructs high-quality meshes from multi-view generated images. We utilize an advanced gradient-based mesh optimization, namely FlexiCubes, for multi-view mesh reconstruction, which enables us to generate 3D meshes in an end-to-end manner. To address the reconstruction artifacts owing to the inconsistencies from generated images, we design a hybrid positional encoding scheme to improve the reconstruction geometry and an orientation-aware texture mapping to mitigate surface ghosting. To further enhance the results, we respectively incorporate eikonal and smooth regularizations to reduce geometric holes and surface noise. Our approach can generate high-fidelity 3D meshes in the single image-to-3D downstream task with approximately 1 minute, significantly outperforming previous methods.
|
1712.09509
|
Zhiqing Sun
|
Zhiqing Sun, Gehui Shen, Zhihong Deng
|
A Gap-Based Framework for Chinese Word Segmentation via Very Deep
Convolutional Networks
|
Under review as a conference paper at ACL 2018; 10 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most previous approaches to Chinese word segmentation can be roughly
classified into character-based and word-based methods. The former regards this
task as a sequence-labeling problem, while the latter directly segments
character sequence into words. However, if we consider segmenting a given
sentence, the most intuitive idea is to predict whether to segment for each gap
between two consecutive characters, which in comparison makes previous
approaches seem too complex. Therefore, in this paper, we propose a gap-based
framework to implement this intuitive idea. Moreover, very deep convolutional
neural networks, namely, ResNets and DenseNets, are exploited in our
experiments. Results show that our approach outperforms the best
character-based and word-based methods on 5 benchmarks, without any further
post-processing module (e.g. Conditional Random Fields) nor beam search.
|
[
{
"created": "Wed, 27 Dec 2017 06:44:02 GMT",
"version": "v1"
}
] |
2017-12-29
|
[
[
"Sun",
"Zhiqing",
""
],
[
"Shen",
"Gehui",
""
],
[
"Deng",
"Zhihong",
""
]
] |
Most previous approaches to Chinese word segmentation can be roughly classified into character-based and word-based methods. The former regards this task as a sequence-labeling problem, while the latter directly segments character sequence into words. However, if we consider segmenting a given sentence, the most intuitive idea is to predict whether to segment for each gap between two consecutive characters, which in comparison makes previous approaches seem too complex. Therefore, in this paper, we propose a gap-based framework to implement this intuitive idea. Moreover, very deep convolutional neural networks, namely, ResNets and DenseNets, are exploited in our experiments. Results show that our approach outperforms the best character-based and word-based methods on 5 benchmarks, without any further post-processing module (e.g. Conditional Random Fields) nor beam search.
|
1710.08986
|
Dimitri Scheftelowitsch
|
Dimitri Scheftelowitsch, Peter Buchholz, Vahid Hashemi, Holger
Hermanns
|
Multi-Objective Approaches to Markov Decision Processes with Uncertain
Transition Parameters
|
9 pages, 5 figures, preprint for VALUETOOLS 2017
| null | null | null |
cs.AI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Markov decision processes (MDPs) are a popular model for performance analysis
and optimization of stochastic systems. The parameters of stochastic behavior
of MDPs are estimates from empirical observations of a system; their values are
not known precisely. Different types of MDPs with uncertain, imprecise or
bounded transition rates or probabilities and rewards exist in the literature.
Commonly, analysis of models with uncertainties amounts to searching for the
most robust policy which means that the goal is to generate a policy with the
greatest lower bound on performance (or, symmetrically, the lowest upper bound
on costs). However, hedging against an unlikely worst case may lead to losses
in other situations. In general, one is interested in policies that behave well
in all situations which results in a multi-objective view on decision making.
In this paper, we consider policies for the expected discounted reward
measure of MDPs with uncertain parameters. In particular, the approach is
defined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst, best
and average case performances of a policy are analyzed simultaneously, which
yields a multi-scenario multi-objective optimization problem. The paper
presents and evaluates approaches to compute the pure Pareto optimal policies
in the value vector space.
|
[
{
"created": "Fri, 20 Oct 2017 07:47:41 GMT",
"version": "v1"
}
] |
2017-10-26
|
[
[
"Scheftelowitsch",
"Dimitri",
""
],
[
"Buchholz",
"Peter",
""
],
[
"Hashemi",
"Vahid",
""
],
[
"Hermanns",
"Holger",
""
]
] |
Markov decision processes (MDPs) are a popular model for performance analysis and optimization of stochastic systems. The parameters of stochastic behavior of MDPs are estimates from empirical observations of a system; their values are not known precisely. Different types of MDPs with uncertain, imprecise or bounded transition rates or probabilities and rewards exist in the literature. Commonly, analysis of models with uncertainties amounts to searching for the most robust policy which means that the goal is to generate a policy with the greatest lower bound on performance (or, symmetrically, the lowest upper bound on costs). However, hedging against an unlikely worst case may lead to losses in other situations. In general, one is interested in policies that behave well in all situations which results in a multi-objective view on decision making. In this paper, we consider policies for the expected discounted reward measure of MDPs with uncertain parameters. In particular, the approach is defined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst, best and average case performances of a policy are analyzed simultaneously, which yields a multi-scenario multi-objective optimization problem. The paper presents and evaluates approaches to compute the pure Pareto optimal policies in the value vector space.
|
1612.05878
|
Suzhi Bi
|
Suzhi Bi and Ying Jun Zhang
|
Graph-based Cyber Security Analysis of State Estimation in Smart Power
Grid
|
This article has been accepted for publication by IEEE Communications
Magazine (Dec 2016)
| null | null | null |
cs.SY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart power grid enables intelligent automation at all levels of power system
operation, from electricity generation at power plants to power usage at
households. The key enabling factor of an efficient smart grid is its built-in
information and communication technology (ICT) that monitors the real-time
system operating state and makes control decisions accordingly. As an important
building block of the ICT system, power system state estimation is of critical
importance to maintain normal operation of the smart grid, which, however, is
under mounting threat from potential cyber attacks. In this article, we
introduce a graph-based framework for performing cyber-security analysis in
power system state estimation. Compared to conventional arithmetic-based
security analysis, the graphical characterization of state estimation security
provides intuitive visualization of some complex problem structures and enables
efficient graphical solution algorithms, which are useful for both defending
and attacking the ICT system of smart grid. We also highlight several promising
future research directions on graph-based security analysis and its
applications in smart power grid.
|
[
{
"created": "Sun, 18 Dec 2016 09:27:40 GMT",
"version": "v1"
}
] |
2016-12-20
|
[
[
"Bi",
"Suzhi",
""
],
[
"Zhang",
"Ying Jun",
""
]
] |
Smart power grid enables intelligent automation at all levels of power system operation, from electricity generation at power plants to power usage at households. The key enabling factor of an efficient smart grid is its built-in information and communication technology (ICT) that monitors the real-time system operating state and makes control decisions accordingly. As an important building block of the ICT system, power system state estimation is of critical importance to maintain normal operation of the smart grid, which, however, is under mounting threat from potential cyber attacks. In this article, we introduce a graph-based framework for performing cyber-security analysis in power system state estimation. Compared to conventional arithmetic-based security analysis, the graphical characterization of state estimation security provides intuitive visualization of some complex problem structures and enables efficient graphical solution algorithms, which are useful for both defending and attacking the ICT system of smart grid. We also highlight several promising future research directions on graph-based security analysis and its applications in smart power grid.
|
2203.16875
|
Xiangjun Gao
|
Xiangjun Gao, Jiaolong Yang, Jongyoo Kim, Sida Peng, Zicheng Liu, Xin
Tong
|
MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been rapid progress recently on 3D human rendering, including novel
view synthesis and pose animation, based on the advances of neural radiance
fields (NeRF). However, most existing methods focus on person-specific training
and their training typically requires multi-view videos. This paper deals with
a new challenging task -- rendering novel views and novel poses for a person
unseen in training, using only multiview images as input. For this task, we
propose a simple yet effective method to train a generalizable NeRF with
multiview images as conditional input. The key ingredient is a dedicated
representation combining a canonical NeRF and a volume deformation scheme.
Using a canonical space enables our method to learn shared properties of human
and easily generalize to different people. Volume deformation is used to
connect the canonical space with input and target images and query image
features for radiance and density prediction. We leverage the parametric 3D
human model fitted on the input images to derive the deformation, which works
quite well in practice when combined with our canonical NeRF. The experiments
on both real and synthetic data with the novel view synthesis and pose
animation tasks collectively demonstrate the efficacy of our method.
|
[
{
"created": "Thu, 31 Mar 2022 08:09:03 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jul 2022 06:10:50 GMT",
"version": "v2"
}
] |
2022-07-28
|
[
[
"Gao",
"Xiangjun",
""
],
[
"Yang",
"Jiaolong",
""
],
[
"Kim",
"Jongyoo",
""
],
[
"Peng",
"Sida",
""
],
[
"Liu",
"Zicheng",
""
],
[
"Tong",
"Xin",
""
]
] |
There has been rapid progress recently on 3D human rendering, including novel view synthesis and pose animation, based on the advances of neural radiance fields (NeRF). However, most existing methods focus on person-specific training and their training typically requires multi-view videos. This paper deals with a new challenging task -- rendering novel views and novel poses for a person unseen in training, using only multiview images as input. For this task, we propose a simple yet effective method to train a generalizable NeRF with multiview images as conditional input. The key ingredient is a dedicated representation combining a canonical NeRF and a volume deformation scheme. Using a canonical space enables our method to learn shared properties of human and easily generalize to different people. Volume deformation is used to connect the canonical space with input and target images and query image features for radiance and density prediction. We leverage the parametric 3D human model fitted on the input images to derive the deformation, which works quite well in practice when combined with our canonical NeRF. The experiments on both real and synthetic data with the novel view synthesis and pose animation tasks collectively demonstrate the efficacy of our method.
|
2311.10388
|
Junjie Zhao
|
Junjie Zhao and Xiang Chen and Guang Yang and Yiheng Shen
|
Automatic Smart Contract Comment Generation via Large Language Models
and In-Context Learning
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The previous smart contract code comment (SCC) generation approaches can be
divided into two categories: fine-tuning paradigm-based approaches and
information retrieval-based approaches. However, for the fine-tuning
paradigm-based approaches, the performance may be limited by the quality of the
gathered dataset for the downstream task and they may have knowledge-forgetting
issues. While for the information retrieval-based approaches, it is difficult
for them to generate high-quality comments if similar code does not exist in
the historical repository. Therefore we want to utilize the domain knowledge
related to SCC generation in large language models (LLMs) to alleviate the
disadvantages of these two types of approaches. In this study, we propose an
approach SCCLLM based on LLMs and in-context learning. Specifically, in the
demonstration selection phase, SCCLLM retrieves the top-k code snippets from
the historical corpus by considering syntax, semantics, and lexical
information. In the in-context learning phase, SCCLLM utilizes the retrieved
code snippets as demonstrations, which can help to utilize the related
knowledge for this task. We select a large corpus from a smart contract
community Etherscan.io as our experimental subject. Extensive experimental
results show the effectiveness of SCCLLM when compared with baselines in
automatic evaluation and human evaluation.
|
[
{
"created": "Fri, 17 Nov 2023 08:31:09 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jan 2024 07:58:25 GMT",
"version": "v2"
}
] |
2024-01-17
|
[
[
"Zhao",
"Junjie",
""
],
[
"Chen",
"Xiang",
""
],
[
"Yang",
"Guang",
""
],
[
"Shen",
"Yiheng",
""
]
] |
The previous smart contract code comment (SCC) generation approaches can be divided into two categories: fine-tuning paradigm-based approaches and information retrieval-based approaches. However, for the fine-tuning paradigm-based approaches, the performance may be limited by the quality of the gathered dataset for the downstream task and they may have knowledge-forgetting issues. While for the information retrieval-based approaches, it is difficult for them to generate high-quality comments if similar code does not exist in the historical repository. Therefore we want to utilize the domain knowledge related to SCC generation in large language models (LLMs) to alleviate the disadvantages of these two types of approaches. In this study, we propose an approach SCCLLM based on LLMs and in-context learning. Specifically, in the demonstration selection phase, SCCLLM retrieves the top-k code snippets from the historical corpus by considering syntax, semantics, and lexical information. In the in-context learning phase, SCCLLM utilizes the retrieved code snippets as demonstrations, which can help to utilize the related knowledge for this task. We select a large corpus from a smart contract community Etherscan.io as our experimental subject. Extensive experimental results show the effectiveness of SCCLLM when compared with baselines in automatic evaluation and human evaluation.
|
2108.10061
|
Larkin Liu
|
Larkin Liu, Jun Tao Luo
|
An Extensible and Modular Design and Implementation of Monte Carlo Tree
Search for the JVM
|
18 pages, 7 figures, Manuscript
| null | null | null |
cs.LG stat.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Flexible implementations of Monte Carlo Tree Search (MCTS), combined with
domain specific knowledge and hybridization with other search algorithms, can
be powerful for finding the solutions to problems in complex planning. We
introduce mctreesearch4j, an MCTS implementation written as a standard JVM
library following key design principles of object oriented programming. We
define key class abstractions allowing the MCTS library to flexibly adapt to
any well defined Markov Decision Process or turn-based adversarial game.
Furthermore, our library is designed to be modular and extensible, utilizing
class inheritance and generic typing to standardize custom algorithm
definitions. We demonstrate that the design of the MCTS implementation provides
ease of adaptation for unique heuristics and customization across varying
Markov Decision Process (MDP) domains. In addition, the implementation is
reasonably performant and accurate for standard MDP's. In addition, via the
implementation of mctreesearch4j, the nuances of different types of MCTS
algorithms are discussed.
|
[
{
"created": "Fri, 30 Jul 2021 08:17:04 GMT",
"version": "v1"
}
] |
2021-08-24
|
[
[
"Liu",
"Larkin",
""
],
[
"Luo",
"Jun Tao",
""
]
] |
Flexible implementations of Monte Carlo Tree Search (MCTS), combined with domain specific knowledge and hybridization with other search algorithms, can be powerful for finding the solutions to problems in complex planning. We introduce mctreesearch4j, an MCTS implementation written as a standard JVM library following key design principles of object oriented programming. We define key class abstractions allowing the MCTS library to flexibly adapt to any well defined Markov Decision Process or turn-based adversarial game. Furthermore, our library is designed to be modular and extensible, utilizing class inheritance and generic typing to standardize custom algorithm definitions. We demonstrate that the design of the MCTS implementation provides ease of adaptation for unique heuristics and customization across varying Markov Decision Process (MDP) domains. In addition, the implementation is reasonably performant and accurate for standard MDP's. In addition, via the implementation of mctreesearch4j, the nuances of different types of MCTS algorithms are discussed.
|
1606.03212
|
Furong Huang
|
Furong Huang
|
Discovery of Latent Factors in High-dimensional Data Using Tensor
Methods
|
Ph.D. Thesis
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised learning aims at the discovery of hidden structure that drives
the observations in the real world. It is essential for success in modern
machine learning. Latent variable models are versatile in unsupervised learning
and have applications in almost every domain. Training latent variable models
is challenging due to the non-convexity of the likelihood objective. An
alternative method is based on the spectral decomposition of low order moment
tensors. This versatile framework is guaranteed to estimate the correct model
consistently. My thesis spans both theoretical analysis of tensor decomposition
framework and practical implementation of various applications. This thesis
presents theoretical results on convergence to globally optimal solution of
tensor decomposition using the stochastic gradient descent, despite
non-convexity of the objective. This is the first work that gives global
convergence guarantees for the stochastic gradient descent on non-convex
functions with exponentially many local minima and saddle points. This thesis
also presents large-scale deployment of spectral methods carried out on various
platforms. Dimensionality reduction techniques such as random projection are
incorporated for a highly parallel and scalable tensor decomposition algorithm.
We obtain a gain in both accuracies and in running times by several orders of
magnitude compared to the state-of-art variational methods. To solve real world
problems, more advanced models and learning algorithms are proposed. This
thesis discusses generalization of LDA model to mixed membership stochastic
block model for learning user communities in social network, convolutional
dictionary model for learning word-sequence embeddings, hierarchical tensor
decomposition and latent tree structure model for learning disease hierarchy,
and spatial point process mixture model for detecting cell types in
neuroscience.
|
[
{
"created": "Fri, 10 Jun 2016 07:17:00 GMT",
"version": "v1"
}
] |
2016-06-13
|
[
[
"Huang",
"Furong",
""
]
] |
Unsupervised learning aims at the discovery of hidden structure that drives the observations in the real world. It is essential for success in modern machine learning. Latent variable models are versatile in unsupervised learning and have applications in almost every domain. Training latent variable models is challenging due to the non-convexity of the likelihood objective. An alternative method is based on the spectral decomposition of low order moment tensors. This versatile framework is guaranteed to estimate the correct model consistently. My thesis spans both theoretical analysis of tensor decomposition framework and practical implementation of various applications. This thesis presents theoretical results on convergence to globally optimal solution of tensor decomposition using the stochastic gradient descent, despite non-convexity of the objective. This is the first work that gives global convergence guarantees for the stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. This thesis also presents large-scale deployment of spectral methods carried out on various platforms. Dimensionality reduction techniques such as random projection are incorporated for a highly parallel and scalable tensor decomposition algorithm. We obtain a gain in both accuracies and in running times by several orders of magnitude compared to the state-of-art variational methods. To solve real world problems, more advanced models and learning algorithms are proposed. This thesis discusses generalization of LDA model to mixed membership stochastic block model for learning user communities in social network, convolutional dictionary model for learning word-sequence embeddings, hierarchical tensor decomposition and latent tree structure model for learning disease hierarchy, and spatial point process mixture model for detecting cell types in neuroscience.
|
2308.08991
|
Yuqiang Sun
|
Yuqiang Sun, Zhengzi Xu, Chengwei Liu, Yiran Zhang, Yang Liu
|
Who is the Real Hero? Measuring Developer Contribution via
Multi-dimensional Data Integration
| null | null |
10.1109/ASE56229.2023.00102
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proper incentives are important for motivating developers in open-source
communities, which is crucial for maintaining the development of open-source
software healthy. To provide such incentives, an accurate and objective
developer contribution measurement method is needed. However, existing methods
rely heavily on manual peer review, lacking objectivity and transparency. The
metrics of some automated works about effort estimation use only syntax-level
or even text-level information, such as changed lines of code, which lack
robustness. Furthermore, some works about identifying core developers provide
only a qualitative understanding without a quantitative score or have some
project-specific parameters, which makes them not practical in real-world
projects. To this end, we propose CValue, a multidimensional information
fusion-based approach to measure developer contributions. CValue extracts both
syntax and semantic information from the source code changes in four
dimensions: modification amount, understandability, inter-function and
intra-function impact of modification. It fuses the information to produce the
contribution score for each of the commits in the projects. Experimental
results show that CValue outperforms other approaches by 19.59% on 10
real-world projects with manually labeled ground truth. We validated and proved
that the performance of CValue, which takes 83.39 seconds per commit, is
acceptable to be applied in real-world projects. Furthermore, we performed a
large-scale experiment on 174 projects and detected 2,282 developers having
inflated commits. Of these, 2,050 developers did not make any syntax
contribution; and 103 were identified as bots.
|
[
{
"created": "Thu, 17 Aug 2023 13:57:44 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Aug 2023 07:42:39 GMT",
"version": "v2"
}
] |
2023-11-21
|
[
[
"Sun",
"Yuqiang",
""
],
[
"Xu",
"Zhengzi",
""
],
[
"Liu",
"Chengwei",
""
],
[
"Zhang",
"Yiran",
""
],
[
"Liu",
"Yang",
""
]
] |
Proper incentives are important for motivating developers in open-source communities, which is crucial for maintaining the development of open-source software healthy. To provide such incentives, an accurate and objective developer contribution measurement method is needed. However, existing methods rely heavily on manual peer review, lacking objectivity and transparency. The metrics of some automated works about effort estimation use only syntax-level or even text-level information, such as changed lines of code, which lack robustness. Furthermore, some works about identifying core developers provide only a qualitative understanding without a quantitative score or have some project-specific parameters, which makes them not practical in real-world projects. To this end, we propose CValue, a multidimensional information fusion-based approach to measure developer contributions. CValue extracts both syntax and semantic information from the source code changes in four dimensions: modification amount, understandability, inter-function and intra-function impact of modification. It fuses the information to produce the contribution score for each of the commits in the projects. Experimental results show that CValue outperforms other approaches by 19.59% on 10 real-world projects with manually labeled ground truth. We validated and proved that the performance of CValue, which takes 83.39 seconds per commit, is acceptable to be applied in real-world projects. Furthermore, we performed a large-scale experiment on 174 projects and detected 2,282 developers having inflated commits. Of these, 2,050 developers did not make any syntax contribution; and 103 were identified as bots.
|
1611.06385
|
Jop Bri\"et
|
Jop Bri\"et
|
On Embeddings of $\ell_1^k$ from Locally Decodable Codes
|
Appeared earlier on ECCC (http://eccc.hpi-web.de/report/2015/086/).
This version has a slightly shorter abstract and slightly edited
introduction. Removed left-over notes
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that any $q$-query locally decodable code (LDC) gives a copy of
$\ell_1^k$ with small distortion in the Banach space of $q$-linear forms on
$\ell_{p_1}^N\times\cdots\times\ell_{p_q}^N$, provided $1/p_1 + \cdots + 1/p_q
\leq 1$ and where $k$, $N$, and the distortion are simple functions of the code
parameters. We exhibit the copy of $\ell_1^k$ by constructing a basis for it
directly from "smooth" LDC decoders. Based on this, we give alternative proofs
for known lower bounds on the length of 2-query LDCs. Using similar techniques,
we reprove known lower bounds for larger $q$. We also discuss the relation with
an alternative proof, due to Pisier, of a result of Naor, Regev, and the author
on cotype properties of projective tensor products of $\ell_p$ spaces.
|
[
{
"created": "Sat, 19 Nov 2016 15:39:20 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Nov 2016 12:04:37 GMT",
"version": "v2"
}
] |
2016-11-23
|
[
[
"Briët",
"Jop",
""
]
] |
We show that any $q$-query locally decodable code (LDC) gives a copy of $\ell_1^k$ with small distortion in the Banach space of $q$-linear forms on $\ell_{p_1}^N\times\cdots\times\ell_{p_q}^N$, provided $1/p_1 + \cdots + 1/p_q \leq 1$ and where $k$, $N$, and the distortion are simple functions of the code parameters. We exhibit the copy of $\ell_1^k$ by constructing a basis for it directly from "smooth" LDC decoders. Based on this, we give alternative proofs for known lower bounds on the length of 2-query LDCs. Using similar techniques, we reprove known lower bounds for larger $q$. We also discuss the relation with an alternative proof, due to Pisier, of a result of Naor, Regev, and the author on cotype properties of projective tensor products of $\ell_p$ spaces.
|
1811.03966
|
Lars Jaffke
|
Lars Jaffke and Paloma T. Lima
|
A Complexity Dichotomy for Critical Values of the b-Chromatic Number of
Graphs
|
20 pages, 1 figure
| null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A $b$-coloring of a graph $G$ is a proper coloring of its vertices such that
each color class contains a vertex that has at least one neighbor in all the
other color classes. The b-Coloring problem asks whether a graph $G$ has a
$b$-coloring with $k$ colors. The $b$-chromatic number of a graph $G$, denoted
by $\chi_b(G)$, is the maximum number $k$ such that $G$ admits a $b$-coloring
with $k$ colors. We consider the complexity of the b-Coloring problem, whenever
the value of $k$ is close to one of two upper bounds on $\chi_b(G)$: The
maximum degree $\Delta(G)$ plus one, and the $m$-degree, denoted by $m(G)$,
which is defined as the maximum number $i$ such that $G$ has $i$ vertices of
degree at least $i-1$. We obtain a dichotomy result stating that for fixed $k
\in \{\Delta(G) + 1 - p, m(G) - p\}$, the problem is polynomial-time solvable
whenever $p \in \{0, 1\}$ and, even when $k = 3$, it is NP-complete whenever $p
\ge 2$. We furthermore consider parameterizations of the b-Coloring problem
that involve the maximum degree $\Delta(G)$ of the input graph $G$ and give two
FPT-algorithms. First, we show that deciding whether a graph $G$ has a
$b$-coloring with $m(G)$ colors is FPT parameterized by $\Delta(G)$. Second, we
show that b-Coloring is FPT parameterized by $\Delta(G) + \ell_k(G)$, where
$\ell_k(G)$ denotes the number of vertices of degree at least $k$.
|
[
{
"created": "Fri, 9 Nov 2018 15:22:35 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Feb 2019 15:04:17 GMT",
"version": "v2"
}
] |
2019-02-12
|
[
[
"Jaffke",
"Lars",
""
],
[
"Lima",
"Paloma T.",
""
]
] |
A $b$-coloring of a graph $G$ is a proper coloring of its vertices such that each color class contains a vertex that has at least one neighbor in all the other color classes. The b-Coloring problem asks whether a graph $G$ has a $b$-coloring with $k$ colors. The $b$-chromatic number of a graph $G$, denoted by $\chi_b(G)$, is the maximum number $k$ such that $G$ admits a $b$-coloring with $k$ colors. We consider the complexity of the b-Coloring problem, whenever the value of $k$ is close to one of two upper bounds on $\chi_b(G)$: The maximum degree $\Delta(G)$ plus one, and the $m$-degree, denoted by $m(G)$, which is defined as the maximum number $i$ such that $G$ has $i$ vertices of degree at least $i-1$. We obtain a dichotomy result stating that for fixed $k \in \{\Delta(G) + 1 - p, m(G) - p\}$, the problem is polynomial-time solvable whenever $p \in \{0, 1\}$ and, even when $k = 3$, it is NP-complete whenever $p \ge 2$. We furthermore consider parameterizations of the b-Coloring problem that involve the maximum degree $\Delta(G)$ of the input graph $G$ and give two FPT-algorithms. First, we show that deciding whether a graph $G$ has a $b$-coloring with $m(G)$ colors is FPT parameterized by $\Delta(G)$. Second, we show that b-Coloring is FPT parameterized by $\Delta(G) + \ell_k(G)$, where $\ell_k(G)$ denotes the number of vertices of degree at least $k$.
|
1504.02089
|
Tomer Koren
|
Elad Hazan, Tomer Koren
|
The Computational Power of Optimization in Online Learning
| null | null | null | null |
cs.LG cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the fundamental problem of prediction with expert advice where
the experts are "optimizable": there is a black-box optimization oracle that
can be used to compute, in constant time, the leading expert in retrospect at
any point in time. In this setting, we give a novel online algorithm that
attains vanishing regret with respect to $N$ experts in total
$\widetilde{O}(\sqrt{N})$ computation time. We also give a lower bound showing
that this running time cannot be improved (up to log factors) in the oracle
model, thereby exhibiting a quadratic speedup as compared to the standard,
oracle-free setting where the required time for vanishing regret is
$\widetilde{\Theta}(N)$. These results demonstrate an exponential gap between
the power of optimization in online learning and its power in statistical
learning: in the latter, an optimization oracle---i.e., an efficient empirical
risk minimizer---allows to learn a finite hypothesis class of size $N$ in time
$O(\log{N})$. We also study the implications of our results to learning in
repeated zero-sum games, in a setting where the players have access to oracles
that compute, in constant time, their best-response to any mixed strategy of
their opponent. We show that the runtime required for approximating the minimax
value of the game in this setting is $\widetilde{\Theta}(\sqrt{N})$, yielding
again a quadratic improvement upon the oracle-free setting, where
$\widetilde{\Theta}(N)$ is known to be tight.
|
[
{
"created": "Wed, 8 Apr 2015 19:54:27 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Apr 2015 12:15:37 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Nov 2015 20:30:50 GMT",
"version": "v3"
},
{
"created": "Wed, 27 Jan 2016 09:07:59 GMT",
"version": "v4"
}
] |
2016-01-28
|
[
[
"Hazan",
"Elad",
""
],
[
"Koren",
"Tomer",
""
]
] |
We consider the fundamental problem of prediction with expert advice where the experts are "optimizable": there is a black-box optimization oracle that can be used to compute, in constant time, the leading expert in retrospect at any point in time. In this setting, we give a novel online algorithm that attains vanishing regret with respect to $N$ experts in total $\widetilde{O}(\sqrt{N})$ computation time. We also give a lower bound showing that this running time cannot be improved (up to log factors) in the oracle model, thereby exhibiting a quadratic speedup as compared to the standard, oracle-free setting where the required time for vanishing regret is $\widetilde{\Theta}(N)$. These results demonstrate an exponential gap between the power of optimization in online learning and its power in statistical learning: in the latter, an optimization oracle---i.e., an efficient empirical risk minimizer---allows to learn a finite hypothesis class of size $N$ in time $O(\log{N})$. We also study the implications of our results to learning in repeated zero-sum games, in a setting where the players have access to oracles that compute, in constant time, their best-response to any mixed strategy of their opponent. We show that the runtime required for approximating the minimax value of the game in this setting is $\widetilde{\Theta}(\sqrt{N})$, yielding again a quadratic improvement upon the oracle-free setting, where $\widetilde{\Theta}(N)$ is known to be tight.
|
1701.01170
|
Yangzihao Wang
|
Yangzihao Wang, Yuechao Pan, Andrew Davidson, Yuduo Wu, Carl Yang,
Leyuan Wang, Muhammad Osama, Chenshan Yuan, Weitang Liu, Andy T. Riffel and
John D. Owens
|
Gunrock: GPU Graph Analytics
|
52 pages, invited paper to ACM Transactions on Parallel Computing
(TOPC), an extended version of PPoPP'16 paper "Gunrock: A High-Performance
Graph Processing Library on the GPU"
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs, have presented two
significant challenges to developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We characterize the performance of
various optimization strategies and evaluate Gunrock's overall performance on
different GPU architectures on a wide range of graph primitives that span from
traversal-based algorithms and ranking algorithms, to triangle counting and
bipartite-graph-based algorithms. The results show that on a single GPU,
Gunrock has on average at least an order of magnitude speedup over Boost and
PowerGraph, comparable performance to the fastest GPU hardwired primitives and
CPU shared-memory graph libraries such as Ligra and Galois, and better
performance than any other GPU high-level graph library.
|
[
{
"created": "Wed, 4 Jan 2017 22:16:07 GMT",
"version": "v1"
}
] |
2017-01-06
|
[
[
"Wang",
"Yangzihao",
""
],
[
"Pan",
"Yuechao",
""
],
[
"Davidson",
"Andrew",
""
],
[
"Wu",
"Yuduo",
""
],
[
"Yang",
"Carl",
""
],
[
"Wang",
"Leyuan",
""
],
[
"Osama",
"Muhammad",
""
],
[
"Yuan",
"Chenshan",
""
],
[
"Liu",
"Weitang",
""
],
[
"Riffel",
"Andy T.",
""
],
[
"Owens",
"John D.",
""
]
] |
For large-scale graph analytics on the GPU, the irregularity of data access and control flow, and the complexity of programming GPUs, have presented two significant challenges to developing a programmable high-performance graph library. "Gunrock", our graph-processing system designed specifically for the GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on a vertex or edge frontier. Gunrock achieves a balance between performance and expressiveness by coupling high performance GPU computing primitives and optimization strategies with a high-level programming model that allows programmers to quickly develop new graph primitives with small code size and minimal GPU programming knowledge. We characterize the performance of various optimization strategies and evaluate Gunrock's overall performance on different GPU architectures on a wide range of graph primitives that span from traversal-based algorithms and ranking algorithms, to triangle counting and bipartite-graph-based algorithms. The results show that on a single GPU, Gunrock has on average at least an order of magnitude speedup over Boost and PowerGraph, comparable performance to the fastest GPU hardwired primitives and CPU shared-memory graph libraries such as Ligra and Galois, and better performance than any other GPU high-level graph library.
|
2208.04022
|
Wen-Ji Zhou
|
Qianying Lin, Wen-Ji Zhou, Yanshi Wang, Qing Da, Qing-Guo Chen, Bing
Wang
|
Sparse Attentive Memory Network for Click-through Rate Prediction with
Long Sequences
|
Published as a conference paper at the 31st ACM International
Conference on Information and Knowledge Management, CIKM 2022
| null |
10.1145/3511808.3557095
| null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequential recommendation predicts users' next behaviors with their
historical interactions. Recommending with longer sequences improves
recommendation accuracy and increases the degree of personalization. As
sequences get longer, existing works have not yet addressed the following two
main challenges. Firstly, modeling long-range intra-sequence dependency is
difficult with increasing sequence lengths. Secondly, it requires efficient
memory and computational speeds. In this paper, we propose a Sparse Attentive
Memory (SAM) network for long sequential user behavior modeling. SAM supports
efficient training and real-time inference for user behavior sequences with
lengths on the scale of thousands. In SAM, we model the target item as the
query and the long sequence as the knowledge database, where the former
continuously elicits relevant information from the latter. SAM simultaneously
models target-sequence dependencies and long-range intra-sequence dependencies
with O(L) complexity and O(1) number of sequential updates, which can only be
achieved by the self-attention mechanism with O(L^2) complexity. Extensive
empirical results demonstrate that our proposed solution is effective not only
in long user behavior modeling but also on short sequences modeling.
Implemented on sequences of length 1000, SAM is successfully deployed on one of
the largest international E-commerce platforms. This inference time is within
30ms, with a substantial 7.30% click-through rate improvement for the online
A/B test. To the best of our knowledge, it is the first end-to-end long user
sequence modeling framework that models intra-sequence and target-sequence
dependencies with the aforementioned degree of efficiency and successfully
deployed on a large-scale real-time industrial recommender system.
|
[
{
"created": "Mon, 8 Aug 2022 10:11:46 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Sep 2022 06:17:49 GMT",
"version": "v2"
}
] |
2022-09-05
|
[
[
"Lin",
"Qianying",
""
],
[
"Zhou",
"Wen-Ji",
""
],
[
"Wang",
"Yanshi",
""
],
[
"Da",
"Qing",
""
],
[
"Chen",
"Qing-Guo",
""
],
[
"Wang",
"Bing",
""
]
] |
Sequential recommendation predicts users' next behaviors with their historical interactions. Recommending with longer sequences improves recommendation accuracy and increases the degree of personalization. As sequences get longer, existing works have not yet addressed the following two main challenges. Firstly, modeling long-range intra-sequence dependency is difficult with increasing sequence lengths. Secondly, it requires efficient memory and computational speeds. In this paper, we propose a Sparse Attentive Memory (SAM) network for long sequential user behavior modeling. SAM supports efficient training and real-time inference for user behavior sequences with lengths on the scale of thousands. In SAM, we model the target item as the query and the long sequence as the knowledge database, where the former continuously elicits relevant information from the latter. SAM simultaneously models target-sequence dependencies and long-range intra-sequence dependencies with O(L) complexity and O(1) number of sequential updates, which can only be achieved by the self-attention mechanism with O(L^2) complexity. Extensive empirical results demonstrate that our proposed solution is effective not only in long user behavior modeling but also on short sequences modeling. Implemented on sequences of length 1000, SAM is successfully deployed on one of the largest international E-commerce platforms. This inference time is within 30ms, with a substantial 7.30% click-through rate improvement for the online A/B test. To the best of our knowledge, it is the first end-to-end long user sequence modeling framework that models intra-sequence and target-sequence dependencies with the aforementioned degree of efficiency and successfully deployed on a large-scale real-time industrial recommender system.
|
2208.14571
|
Zhen Zhang
|
Zhen Zhang, Ignavier Ng, Dong Gong, Yuhang Liu, Ehsan M Abbasnejad,
Mingming Gong, Kun Zhang, Javen Qinfeng Shi
|
Truncated Matrix Power Iteration for Differentiable DAG Learning
|
Published in NeurIPS 2022
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recovering underlying Directed Acyclic Graph (DAG) structures from
observational data is highly challenging due to the combinatorial nature of the
DAG-constrained optimization problem. Recently, DAG learning has been cast as a
continuous optimization problem by characterizing the DAG constraint as a
smooth equality one, generally based on polynomials over adjacency matrices.
Existing methods place very small coefficients on high-order polynomial terms
for stabilization, since they argue that large coefficients on the higher-order
terms are harmful due to numeric exploding. On the contrary, we discover that
large coefficients on higher-order terms are beneficial for DAG learning, when
the spectral radiuses of the adjacency matrices are small, and that larger
coefficients for higher-order terms can approximate the DAG constraints much
better than the small counterparts. Based on this, we propose a novel DAG
learning method with efficient truncated matrix power iteration to approximate
geometric series based DAG constraints. Empirically, our DAG learning method
outperforms the previous state-of-the-arts in various settings, often by a
factor of $3$ or more in terms of structural Hamming distance.
|
[
{
"created": "Tue, 30 Aug 2022 23:56:12 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 03:21:04 GMT",
"version": "v2"
}
] |
2022-12-23
|
[
[
"Zhang",
"Zhen",
""
],
[
"Ng",
"Ignavier",
""
],
[
"Gong",
"Dong",
""
],
[
"Liu",
"Yuhang",
""
],
[
"Abbasnejad",
"Ehsan M",
""
],
[
"Gong",
"Mingming",
""
],
[
"Zhang",
"Kun",
""
],
[
"Shi",
"Javen Qinfeng",
""
]
] |
Recovering underlying Directed Acyclic Graph (DAG) structures from observational data is highly challenging due to the combinatorial nature of the DAG-constrained optimization problem. Recently, DAG learning has been cast as a continuous optimization problem by characterizing the DAG constraint as a smooth equality one, generally based on polynomials over adjacency matrices. Existing methods place very small coefficients on high-order polynomial terms for stabilization, since they argue that large coefficients on the higher-order terms are harmful due to numeric exploding. On the contrary, we discover that large coefficients on higher-order terms are beneficial for DAG learning, when the spectral radiuses of the adjacency matrices are small, and that larger coefficients for higher-order terms can approximate the DAG constraints much better than the small counterparts. Based on this, we propose a novel DAG learning method with efficient truncated matrix power iteration to approximate geometric series based DAG constraints. Empirically, our DAG learning method outperforms the previous state-of-the-arts in various settings, often by a factor of $3$ or more in terms of structural Hamming distance.
|
2011.06691
|
Fabien Racape
|
Franck Galpin, Fabien Racap\'e, Sunil Jaiswal, Philippe Bordes,
Fabrice Le L\'eannec, Edouard Fran\c{c}ois
|
CNN-based driving of block partitioning for intra slices encoding
|
10 pages
|
2019 Data Compression Conference (DCC)
|
10.1109/DCC.2019.00024
| null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides a technical overview of a deep-learning-based encoder
method aiming at optimizing next generation hybrid video encoders for driving
the block partitioning in intra slices. An encoding approach based on
Convolutional Neural Networks is explored to partly substitute classical
heuristics-based encoder speed-ups by a systematic and automatic process. The
solution allows controlling the trade-off between complexity and coding gains,
in intra slices, with one single parameter. This algorithm was proposed at the
Call for Proposals of the Joint Video Exploration Team (JVET) on video
compression with capability beyond HEVC. In All Intra configuration, for a
given allowed topology of splits, a speed-up of $\times 2$ is obtained without
BD-rate loss, or a speed-up above $\times 4$ with a loss below 1\% in BD-rate.
|
[
{
"created": "Thu, 12 Nov 2020 23:55:12 GMT",
"version": "v1"
}
] |
2020-11-16
|
[
[
"Galpin",
"Franck",
""
],
[
"Racapé",
"Fabien",
""
],
[
"Jaiswal",
"Sunil",
""
],
[
"Bordes",
"Philippe",
""
],
[
"Léannec",
"Fabrice Le",
""
],
[
"François",
"Edouard",
""
]
] |
This paper provides a technical overview of a deep-learning-based encoder method aiming at optimizing next generation hybrid video encoders for driving the block partitioning in intra slices. An encoding approach based on Convolutional Neural Networks is explored to partly substitute classical heuristics-based encoder speed-ups by a systematic and automatic process. The solution allows controlling the trade-off between complexity and coding gains, in intra slices, with one single parameter. This algorithm was proposed at the Call for Proposals of the Joint Video Exploration Team (JVET) on video compression with capability beyond HEVC. In All Intra configuration, for a given allowed topology of splits, a speed-up of $\times 2$ is obtained without BD-rate loss, or a speed-up above $\times 4$ with a loss below 1\% in BD-rate.
|
1605.04478
|
Hamid Tizhoosh
|
Mina Nouredanesh, Hamid R. Tizhoosh, Ershad Banijamali
|
Gabor Barcodes for Medical Image Retrieval
|
To appear in proceedings of The 2016 IEEE International Conference on
Image Processing (ICIP 2016), Sep 25-28, 2016, Phoenix, Arizona, USA
| null |
10.1109/ICIP.2016.7532807
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, advances in medical imaging have led to the emergence of
massive databases, containing images from a diverse range of modalities. This
has significantly heightened the need for automated annotation of the images on
one side, and fast and memory-efficient content-based image retrieval systems
on the other side. Binary descriptors have recently gained more attention as a
potential vehicle to achieve these goals. One of the recently introduced binary
descriptors for tagging of medical images are Radon barcodes (RBCs) that are
driven from Radon transform via local thresholding. Gabor transform is also a
powerful transform to extract texture-based information. Gabor features have
exhibited robustness against rotation, scale, and also photometric
disturbances, such as illumination changes and image noise in many
applications. This paper introduces Gabor Barcodes (GBCs), as a novel framework
for the image annotation. To find the most discriminative GBC for a given query
image, the effects of employing Gabor filters with different parameters, i.e.,
different sets of scales and orientations, are investigated, resulting in
different barcode lengths and retrieval performances. The proposed method has
been evaluated on the IRMA dataset with 193 classes comprising of 12,677 x-ray
images for indexing, and 1,733 x-rays images for testing. A total error score
as low as $351$ ($\approx 80\%$ accuracy for the first hit) was achieved.
|
[
{
"created": "Sat, 14 May 2016 22:39:29 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Nouredanesh",
"Mina",
""
],
[
"Tizhoosh",
"Hamid R.",
""
],
[
"Banijamali",
"Ershad",
""
]
] |
In recent years, advances in medical imaging have led to the emergence of massive databases, containing images from a diverse range of modalities. This has significantly heightened the need for automated annotation of the images on one side, and fast and memory-efficient content-based image retrieval systems on the other side. Binary descriptors have recently gained more attention as a potential vehicle to achieve these goals. One of the recently introduced binary descriptors for tagging of medical images are Radon barcodes (RBCs) that are driven from Radon transform via local thresholding. Gabor transform is also a powerful transform to extract texture-based information. Gabor features have exhibited robustness against rotation, scale, and also photometric disturbances, such as illumination changes and image noise in many applications. This paper introduces Gabor Barcodes (GBCs), as a novel framework for the image annotation. To find the most discriminative GBC for a given query image, the effects of employing Gabor filters with different parameters, i.e., different sets of scales and orientations, are investigated, resulting in different barcode lengths and retrieval performances. The proposed method has been evaluated on the IRMA dataset with 193 classes comprising of 12,677 x-ray images for indexing, and 1,733 x-rays images for testing. A total error score as low as $351$ ($\approx 80\%$ accuracy for the first hit) was achieved.
|
2108.08768
|
Abdullatif Albaseer Mr
|
Abdullatif Albaseer, Mohamed Abdallah, Ala Al-Fuqaha, and Aiman Erbad
|
Client Selection Approach in Support of Clustered Federated Learning
over Wireless Edge Networks
|
4 figures, 7 pages
| null | null | null |
cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Clustered Federated Multitask Learning (CFL) was introduced as an efficient
scheme to obtain reliable specialized models when data is imbalanced and
distributed in a non-i.i.d. (non-independent and identically distributed)
fashion amongst clients. While a similarity measure metric, like the cosine
similarity, can be used to endow groups of the client with a specialized model,
this process can be arduous as the server should involve all clients in each of
the federated learning rounds. Therefore, it is imperative that a subset of
clients is selected periodically due to the limited bandwidth and latency
constraints at the network edge. To this end, this paper proposes a new client
selection algorithm that aims to accelerate the convergence rate for obtaining
specialized machine learning models that achieve high test accuracies for all
client groups. Specifically, we introduce a client selection approach that
leverages the devices' heterogeneity to schedule the clients based on their
round latency and exploits the bandwidth reuse for clients that consume more
time to update the model. Then, the server performs model averaging and
clusters the clients based on predefined thresholds. When a specific cluster
reaches a stationary point, the proposed algorithm uses a greedy scheduling
algorithm for that group by selecting the clients with less latency to update
the model. Extensive experiments show that the proposed approach lowers the
training time and accelerates the convergence rate by up to 50% while imbuing
each client with a specialized model that is fit for its local data
distribution.
|
[
{
"created": "Mon, 16 Aug 2021 21:38:22 GMT",
"version": "v1"
}
] |
2021-08-20
|
[
[
"Albaseer",
"Abdullatif",
""
],
[
"Abdallah",
"Mohamed",
""
],
[
"Al-Fuqaha",
"Ala",
""
],
[
"Erbad",
"Aiman",
""
]
] |
Clustered Federated Multitask Learning (CFL) was introduced as an efficient scheme to obtain reliable specialized models when data is imbalanced and distributed in a non-i.i.d. (non-independent and identically distributed) fashion amongst clients. While a similarity measure metric, like the cosine similarity, can be used to endow groups of the client with a specialized model, this process can be arduous as the server should involve all clients in each of the federated learning rounds. Therefore, it is imperative that a subset of clients is selected periodically due to the limited bandwidth and latency constraints at the network edge. To this end, this paper proposes a new client selection algorithm that aims to accelerate the convergence rate for obtaining specialized machine learning models that achieve high test accuracies for all client groups. Specifically, we introduce a client selection approach that leverages the devices' heterogeneity to schedule the clients based on their round latency and exploits the bandwidth reuse for clients that consume more time to update the model. Then, the server performs model averaging and clusters the clients based on predefined thresholds. When a specific cluster reaches a stationary point, the proposed algorithm uses a greedy scheduling algorithm for that group by selecting the clients with less latency to update the model. Extensive experiments show that the proposed approach lowers the training time and accelerates the convergence rate by up to 50% while imbuing each client with a specialized model that is fit for its local data distribution.
|
2101.09409
|
Shin-Cheng Mu
|
Shin-Cheng Mu
|
Calculating a backtracking algorithm: an exercise in monadic program
derivation
| null | null | null |
TR-IIS-19-003, Institute of Information Science, Academia Sinica
|
cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Equational reasoning is among the most important tools that functional
programming provides us. Curiously, relatively less attention has been paid to
reasoning about monadic programs.
In this report we derive a backtracking algorithm for problem specifications
that use a monadic unfold to generate possible solutions, which are filtered
using a $\mathit{scanl}$-like predicate. We develop theorems that convert a
variation of $\mathit{scanl}$ to a $\mathit{foldr}$ that uses the state monad,
as well as theorems constructing hylomorphism. The algorithm is used to solve
the $n$-queens puzzle, our running example. The aim is to develop theorems and
patterns useful for the derivation of monadic programs, focusing on the
intricate interaction between state and non-determinism.
|
[
{
"created": "Sat, 23 Jan 2021 03:27:20 GMT",
"version": "v1"
}
] |
2021-01-26
|
[
[
"Mu",
"Shin-Cheng",
""
]
] |
Equational reasoning is among the most important tools that functional programming provides us. Curiously, relatively less attention has been paid to reasoning about monadic programs. In this report we derive a backtracking algorithm for problem specifications that use a monadic unfold to generate possible solutions, which are filtered using a $\mathit{scanl}$-like predicate. We develop theorems that convert a variation of $\mathit{scanl}$ to a $\mathit{foldr}$ that uses the state monad, as well as theorems constructing hylomorphism. The algorithm is used to solve the $n$-queens puzzle, our running example. The aim is to develop theorems and patterns useful for the derivation of monadic programs, focusing on the intricate interaction between state and non-determinism.
|
2004.09141
|
Jimmy Wu
|
Jimmy Wu, Xingyuan Sun, Andy Zeng, Shuran Song, Johnny Lee, Szymon
Rusinkiewicz, Thomas Funkhouser
|
Spatial Action Maps for Mobile Manipulation
|
To appear at Robotics: Science and Systems (RSS), 2020. Project page:
https://spatial-action-maps.cs.princeton.edu
| null |
10.15607/RSS.2020.XVI.035
| null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Typical end-to-end formulations for learning robotic navigation involve
predicting a small set of steering command actions (e.g., step forward, turn
left, turn right, etc.) from images of the current state (e.g., a bird's-eye
view of a SLAM reconstruction). Instead, we show that it can be advantageous to
learn with dense action representations defined in the same domain as the
state. In this work, we present "spatial action maps," in which the set of
possible actions is represented by a pixel map (aligned with the input image of
the current state), where each pixel represents a local navigational endpoint
at the corresponding scene location. Using ConvNets to infer spatial action
maps from state images, action predictions are thereby spatially anchored on
local visual features in the scene, enabling significantly faster learning of
complex behaviors for mobile manipulation tasks with reinforcement learning. In
our experiments, we task a robot with pushing objects to a goal location, and
find that policies learned with spatial action maps achieve much better
performance than traditional alternatives.
|
[
{
"created": "Mon, 20 Apr 2020 09:06:10 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Jun 2020 10:56:49 GMT",
"version": "v2"
}
] |
2020-10-13
|
[
[
"Wu",
"Jimmy",
""
],
[
"Sun",
"Xingyuan",
""
],
[
"Zeng",
"Andy",
""
],
[
"Song",
"Shuran",
""
],
[
"Lee",
"Johnny",
""
],
[
"Rusinkiewicz",
"Szymon",
""
],
[
"Funkhouser",
"Thomas",
""
]
] |
Typical end-to-end formulations for learning robotic navigation involve predicting a small set of steering command actions (e.g., step forward, turn left, turn right, etc.) from images of the current state (e.g., a bird's-eye view of a SLAM reconstruction). Instead, we show that it can be advantageous to learn with dense action representations defined in the same domain as the state. In this work, we present "spatial action maps," in which the set of possible actions is represented by a pixel map (aligned with the input image of the current state), where each pixel represents a local navigational endpoint at the corresponding scene location. Using ConvNets to infer spatial action maps from state images, action predictions are thereby spatially anchored on local visual features in the scene, enabling significantly faster learning of complex behaviors for mobile manipulation tasks with reinforcement learning. In our experiments, we task a robot with pushing objects to a goal location, and find that policies learned with spatial action maps achieve much better performance than traditional alternatives.
|
2101.06398
|
JianYu Wang
|
Jianyu Wang, Shanzheng Guan, Shupei Liu, Xiao-Lei Zhang
|
Minimum-volume Multichannel Nonnegative matrix factorization for blind
source separation
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multichannel blind audio source separation aims to recover the latent sources
from their multichannel mixtures without supervised information. One
state-of-the-art blind audio source separation method, named independent
low-rank matrix analysis (ILRMA), unifies independent vector analysis (IVA) and
nonnegative matrix factorization (NMF). However, the spectra matrix produced
from NMF may not find a compact spectral basis. It may not guarantee the
identifiability of each source as well. To address this problem, here we
propose to enhance the identifiability of the source model by a minimum-volume
prior distribution. We further regularize a multichannel NMF (MNMF) and ILRMA
respectively with the minimum-volume regularizer. The proposed methods maximize
the posterior distribution of the separated sources, which ensures the
stability of the convergence. Experimental results demonstrate the
effectiveness of the proposed methods compared with auxiliary independent
vector analysis, MNMF, ILRMA and its extensions.
|
[
{
"created": "Sat, 16 Jan 2021 08:12:23 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Mar 2021 03:11:58 GMT",
"version": "v2"
}
] |
2021-03-31
|
[
[
"Wang",
"Jianyu",
""
],
[
"Guan",
"Shanzheng",
""
],
[
"Liu",
"Shupei",
""
],
[
"Zhang",
"Xiao-Lei",
""
]
] |
Multichannel blind audio source separation aims to recover the latent sources from their multichannel mixtures without supervised information. One state-of-the-art blind audio source separation method, named independent low-rank matrix analysis (ILRMA), unifies independent vector analysis (IVA) and nonnegative matrix factorization (NMF). However, the spectra matrix produced from NMF may not find a compact spectral basis. It may not guarantee the identifiability of each source as well. To address this problem, here we propose to enhance the identifiability of the source model by a minimum-volume prior distribution. We further regularize a multichannel NMF (MNMF) and ILRMA respectively with the minimum-volume regularizer. The proposed methods maximize the posterior distribution of the separated sources, which ensures the stability of the convergence. Experimental results demonstrate the effectiveness of the proposed methods compared with auxiliary independent vector analysis, MNMF, ILRMA and its extensions.
|
2104.09176
|
Stefan Zernetsch
|
Stefan Zernetsch, Hannes Reichert, Viktor Kress, Konrad Doll, Bernhard
Sick
|
Cyclist Intention Detection: A Probabilistic Approach
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents a holistic approach for probabilistic cyclist intention
detection. A basic movement detection based on motion history images (MHI) and
a residual convolutional neural network (ResNet) are used to estimate
probabilities for the current cyclist motion state. These probabilities are
used as weights in a probabilistic ensemble trajectory forecast. The ensemble
consists of specialized models, which produce individual forecasts in the form
of Gaussian distributions under the assumption of a certain motion state of the
cyclist (e.g. cyclist is starting or turning left). By weighting the
specialized models, we create forecasts in the from of Gaussian mixtures that
define regions within which the cyclists will reside with a certain
probability. To evaluate our method, we rate the reliability, sharpness, and
positional accuracy of our forecasted distributions. We compare our method to a
single model approach which produces forecasts in the form of Gaussian
distributions and show that our method is able to produce more reliable and
sharper outputs while retaining comparable positional accuracy. Both methods
are evaluated using a dataset created at a public traffic intersection. Our
code and the dataset are made publicly available.
|
[
{
"created": "Mon, 19 Apr 2021 09:59:04 GMT",
"version": "v1"
}
] |
2021-04-20
|
[
[
"Zernetsch",
"Stefan",
""
],
[
"Reichert",
"Hannes",
""
],
[
"Kress",
"Viktor",
""
],
[
"Doll",
"Konrad",
""
],
[
"Sick",
"Bernhard",
""
]
] |
This article presents a holistic approach for probabilistic cyclist intention detection. A basic movement detection based on motion history images (MHI) and a residual convolutional neural network (ResNet) are used to estimate probabilities for the current cyclist motion state. These probabilities are used as weights in a probabilistic ensemble trajectory forecast. The ensemble consists of specialized models, which produce individual forecasts in the form of Gaussian distributions under the assumption of a certain motion state of the cyclist (e.g. cyclist is starting or turning left). By weighting the specialized models, we create forecasts in the from of Gaussian mixtures that define regions within which the cyclists will reside with a certain probability. To evaluate our method, we rate the reliability, sharpness, and positional accuracy of our forecasted distributions. We compare our method to a single model approach which produces forecasts in the form of Gaussian distributions and show that our method is able to produce more reliable and sharper outputs while retaining comparable positional accuracy. Both methods are evaluated using a dataset created at a public traffic intersection. Our code and the dataset are made publicly available.
|
2105.04112
|
Jun Yang
|
Jun Yang, Yizhou Gao, Dong Li, Steven L. Waslander
|
ROBI: A Multi-View Dataset for Reflective Objects in Robotic Bin-Picking
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In robotic bin-picking applications, the perception of texture-less, highly
reflective parts is a valuable but challenging task. The high glossiness can
introduce fake edges in RGB images and inaccurate depth measurements especially
in heavily cluttered bin scenario. In this paper, we present the ROBI
(Reflective Objects in BIns) dataset, a public dataset for 6D object pose
estimation and multi-view depth fusion in robotic bin-picking scenarios. The
ROBI dataset includes a total of 63 bin-picking scenes captured with two active
stereo camera: a high-cost Ensenso sensor and a low-cost RealSense sensor. For
each scene, the monochrome/RGB images and depth maps are captured from sampled
view spheres around the scene, and are annotated with accurate 6D poses of
visible objects and an associated visibility score. For evaluating the
performance of depth fusion, we captured the ground truth depth maps by
high-cost Ensenso camera with objects coated in anti-reflective scanning spray.
To show the utility of the dataset, we evaluated the representative algorithms
of 6D object pose estimation and multi-view depth fusion on the full dataset.
Evaluation results demonstrate the difficulty of highly reflective objects,
especially in difficult cases due to the degradation of depth data quality,
severe occlusions and cluttered scene. The ROBI dataset is available online at
https://www.trailab.utias.utoronto.ca/robi.
|
[
{
"created": "Mon, 10 May 2021 04:55:29 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Oct 2021 01:12:20 GMT",
"version": "v2"
}
] |
2021-10-08
|
[
[
"Yang",
"Jun",
""
],
[
"Gao",
"Yizhou",
""
],
[
"Li",
"Dong",
""
],
[
"Waslander",
"Steven L.",
""
]
] |
In robotic bin-picking applications, the perception of texture-less, highly reflective parts is a valuable but challenging task. The high glossiness can introduce fake edges in RGB images and inaccurate depth measurements especially in heavily cluttered bin scenario. In this paper, we present the ROBI (Reflective Objects in BIns) dataset, a public dataset for 6D object pose estimation and multi-view depth fusion in robotic bin-picking scenarios. The ROBI dataset includes a total of 63 bin-picking scenes captured with two active stereo camera: a high-cost Ensenso sensor and a low-cost RealSense sensor. For each scene, the monochrome/RGB images and depth maps are captured from sampled view spheres around the scene, and are annotated with accurate 6D poses of visible objects and an associated visibility score. For evaluating the performance of depth fusion, we captured the ground truth depth maps by high-cost Ensenso camera with objects coated in anti-reflective scanning spray. To show the utility of the dataset, we evaluated the representative algorithms of 6D object pose estimation and multi-view depth fusion on the full dataset. Evaluation results demonstrate the difficulty of highly reflective objects, especially in difficult cases due to the degradation of depth data quality, severe occlusions and cluttered scene. The ROBI dataset is available online at https://www.trailab.utias.utoronto.ca/robi.
|
2311.06728
|
Xiyue Gao
|
Xiyue Gao, Zhuang Liu, Jiangtao Cui, Hui Li, Hui Zhang, Kewei Wei,
Kankan Zhao
|
A Comprehensive Survey on Database Management System Fuzzing:
Techniques, Taxonomy and Experimental Comparison
|
34 pages, 22 figures
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Database Management System (DBMS) fuzzing is an automated testing technique
aimed at detecting errors and vulnerabilities in DBMSs by generating, mutating,
and executing test cases. It not only reduces the time and cost of manual
testing but also enhances detection coverage, providing valuable assistance in
developing commercial DBMSs. Existing fuzzing surveys mainly focus on
general-purpose software. However, DBMSs are different from them in terms of
internal structure, input/output, and test objectives, requiring specialized
fuzzing strategies. Therefore, this paper focuses on DBMS fuzzing and provides
a comprehensive review and comparison of the methods in this field. We first
introduce the fundamental concepts. Then, we systematically define a general
fuzzing procedure and decompose and categorize existing methods. Furthermore,
we classify existing methods from the testing objective perspective, covering
various components in DBMSs. For representative works, more detailed
descriptions are provided to analyze their strengths and limitations. To
objectively evaluate the performance of each method, we present an open-source
DBMS fuzzing toolkit, OpenDBFuzz. Based on this toolkit, we conduct a detailed
experimental comparative analysis of existing methods and finally discuss
future research directions.
|
[
{
"created": "Sun, 12 Nov 2023 04:18:03 GMT",
"version": "v1"
}
] |
2023-11-14
|
[
[
"Gao",
"Xiyue",
""
],
[
"Liu",
"Zhuang",
""
],
[
"Cui",
"Jiangtao",
""
],
[
"Li",
"Hui",
""
],
[
"Zhang",
"Hui",
""
],
[
"Wei",
"Kewei",
""
],
[
"Zhao",
"Kankan",
""
]
] |
Database Management System (DBMS) fuzzing is an automated testing technique aimed at detecting errors and vulnerabilities in DBMSs by generating, mutating, and executing test cases. It not only reduces the time and cost of manual testing but also enhances detection coverage, providing valuable assistance in developing commercial DBMSs. Existing fuzzing surveys mainly focus on general-purpose software. However, DBMSs are different from them in terms of internal structure, input/output, and test objectives, requiring specialized fuzzing strategies. Therefore, this paper focuses on DBMS fuzzing and provides a comprehensive review and comparison of the methods in this field. We first introduce the fundamental concepts. Then, we systematically define a general fuzzing procedure and decompose and categorize existing methods. Furthermore, we classify existing methods from the testing objective perspective, covering various components in DBMSs. For representative works, more detailed descriptions are provided to analyze their strengths and limitations. To objectively evaluate the performance of each method, we present an open-source DBMS fuzzing toolkit, OpenDBFuzz. Based on this toolkit, we conduct a detailed experimental comparative analysis of existing methods and finally discuss future research directions.
|
2401.09410
|
Bhaskar Mitra
|
Karina Corti\~nas-Lorenzo, Si\^an Lindley, Ida Larsen-Ledet and
Bhaskar Mitra
|
Through the Looking-Glass: Transparency Implications and Challenges in
Enterprise AI Knowledge Systems
| null | null | null | null |
cs.CY cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge can't be disentangled from people. As AI knowledge systems mine
vast volumes of work-related data, the knowledge that's being extracted and
surfaced is intrinsically linked to the people who create and use it. When
these systems get embedded in organizational settings, the information that is
brought to the foreground and the information that's pushed to the periphery
can influence how individuals see each other and how they see themselves at
work. In this paper, we present the looking-glass metaphor and use it to
conceptualize AI knowledge systems as systems that reflect and distort,
expanding our view on transparency requirements, implications and challenges.
We formulate transparency as a key mediator in shaping different ways of
seeing, including seeing into the system, which unveils its capabilities,
limitations and behavior, and seeing through the system, which shapes workers'
perceptions of their own contributions and others within the organization.
Recognizing the sociotechnical nature of these systems, we identify three
transparency dimensions necessary to realize the value of AI knowledge systems,
namely system transparency, procedural transparency and transparency of
outcomes. We discuss key challenges hindering the implementation of these forms
of transparency, bringing to light the wider sociotechnical gap and
highlighting directions for future Computer-supported Cooperative Work (CSCW)
research.
|
[
{
"created": "Wed, 17 Jan 2024 18:47:30 GMT",
"version": "v1"
}
] |
2024-01-18
|
[
[
"Cortiñas-Lorenzo",
"Karina",
""
],
[
"Lindley",
"Siân",
""
],
[
"Larsen-Ledet",
"Ida",
""
],
[
"Mitra",
"Bhaskar",
""
]
] |
Knowledge can't be disentangled from people. As AI knowledge systems mine vast volumes of work-related data, the knowledge that's being extracted and surfaced is intrinsically linked to the people who create and use it. When these systems get embedded in organizational settings, the information that is brought to the foreground and the information that's pushed to the periphery can influence how individuals see each other and how they see themselves at work. In this paper, we present the looking-glass metaphor and use it to conceptualize AI knowledge systems as systems that reflect and distort, expanding our view on transparency requirements, implications and challenges. We formulate transparency as a key mediator in shaping different ways of seeing, including seeing into the system, which unveils its capabilities, limitations and behavior, and seeing through the system, which shapes workers' perceptions of their own contributions and others within the organization. Recognizing the sociotechnical nature of these systems, we identify three transparency dimensions necessary to realize the value of AI knowledge systems, namely system transparency, procedural transparency and transparency of outcomes. We discuss key challenges hindering the implementation of these forms of transparency, bringing to light the wider sociotechnical gap and highlighting directions for future Computer-supported Cooperative Work (CSCW) research.
|
1902.06950
|
Dominic Orchard
|
Li-yao Xia, Dominic Orchard, Meng Wang
|
Composing bidirectional programs monadically (with appendices)
|
Provides the appendices of the paper, which appears in the
proceedings of European Symposium on Programming (ESOP) 2019
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Software frequently converts data from one representation to another and vice
versa. Naively specifying both conversion directions separately is error prone
and introduces conceptual duplication. Instead, bidirectional programming
techniques allow programs to be written which can be interpreted in both
directions. However, these techniques often employ unfamiliar programming
idioms via restricted, specialised combinator libraries. Instead, we introduce
a framework for composing bidirectional programs monadically, enabling
bidirectional programming with familiar abstractions in functional languages
such as Haskell. We demonstrate the generality of our approach applied to
parsers/printers, lenses, and generators/predicates. We show how to leverage
compositionality and equational reasoning for the verification of
round-tripping properties for such monadic bidirectional programs.
|
[
{
"created": "Tue, 19 Feb 2019 08:47:23 GMT",
"version": "v1"
}
] |
2019-02-20
|
[
[
"Xia",
"Li-yao",
""
],
[
"Orchard",
"Dominic",
""
],
[
"Wang",
"Meng",
""
]
] |
Software frequently converts data from one representation to another and vice versa. Naively specifying both conversion directions separately is error prone and introduces conceptual duplication. Instead, bidirectional programming techniques allow programs to be written which can be interpreted in both directions. However, these techniques often employ unfamiliar programming idioms via restricted, specialised combinator libraries. Instead, we introduce a framework for composing bidirectional programs monadically, enabling bidirectional programming with familiar abstractions in functional languages such as Haskell. We demonstrate the generality of our approach applied to parsers/printers, lenses, and generators/predicates. We show how to leverage compositionality and equational reasoning for the verification of round-tripping properties for such monadic bidirectional programs.
|
1309.0752
|
Dr. Nadeem Javaid
|
N. Javaid, O. Rehman, N. Alrajeh, Z. A. Khan, B. Manzoor, S. Ahmed
|
AID: An Energy Efficient Decoding Scheme for LDPC Codes in Wireless Body
Area Sensor Networks
|
2013 International Workshop on Communications and Sensor Networks
(ComSense-2013), Niagara Falls, Ontario, Canada on October 21-24, 2013 in
conjunction with 4th International Conference on Emerging Ubiquitous Systems
and Pervasive Networks (EUSPN-2013)
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the major challenges in Wireless Body Area Networks (WBANs) is to
prolong the lifetime of network. Traditional research work focuses on
minimizing transmit power, however, in the case of short range communication
the consumption power in decoding is significantly larger than transmit power.
This paper investigates the minimization of total power consumption by reducing
the decoding power consumption. For achieving a desired Bit Error Rate (BER),
we introduce some fundamental results on the basis of iterative message-passing
algorithms for Low Density Parity Check Code (LDPC). To reduce energy
dissipation in decoder, LDPC based coded communications between sensors are
considered. Moreover, we evaluate the performance of LDPC at different code
rates and introduce Adaptive Iterative Decoding (AID) by exploiting threshold
on the number of iterations for a certain BER. In iterative LDPC decoding, the
total energy consumption of network is reduced by 20 to 25 percent.
|
[
{
"created": "Tue, 3 Sep 2013 17:30:48 GMT",
"version": "v1"
}
] |
2013-09-04
|
[
[
"Javaid",
"N.",
""
],
[
"Rehman",
"O.",
""
],
[
"Alrajeh",
"N.",
""
],
[
"Khan",
"Z. A.",
""
],
[
"Manzoor",
"B.",
""
],
[
"Ahmed",
"S.",
""
]
] |
One of the major challenges in Wireless Body Area Networks (WBANs) is to prolong the lifetime of network. Traditional research work focuses on minimizing transmit power, however, in the case of short range communication the consumption power in decoding is significantly larger than transmit power. This paper investigates the minimization of total power consumption by reducing the decoding power consumption. For achieving a desired Bit Error Rate (BER), we introduce some fundamental results on the basis of iterative message-passing algorithms for Low Density Parity Check Code (LDPC). To reduce energy dissipation in decoder, LDPC based coded communications between sensors are considered. Moreover, we evaluate the performance of LDPC at different code rates and introduce Adaptive Iterative Decoding (AID) by exploiting threshold on the number of iterations for a certain BER. In iterative LDPC decoding, the total energy consumption of network is reduced by 20 to 25 percent.
|
2310.00535
|
Yuandong Tian
|
Yuandong Tian, Yiping Wang, Zhenyu Zhang, Beidi Chen, Simon Du
|
JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and
Attention
|
ICLR'24 camera ready. Improve theorem 3 and theorem 4. Polish writing
and add code link
| null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose Joint MLP/Attention (JoMA) dynamics, a novel mathematical
framework to understand the training procedure of multilayer Transformer
architectures. This is achieved by integrating out the self-attention layer in
Transformers, producing a modified dynamics of MLP layers only. JoMA removes
unrealistic assumptions in previous analysis (e.g., lack of residual
connection) and predicts that the attention first becomes sparse (to learn
salient tokens), then dense (to learn less salient tokens) in the presence of
nonlinear activations, while in the linear case, it is consistent with existing
works that show attention becomes sparse over time. We leverage JoMA to
qualitatively explains how tokens are combined to form hierarchies in
multilayer Transformers, when the input tokens are generated by a latent
hierarchical generative model. Experiments on models trained from real-world
dataset (Wikitext2/Wikitext103) and various pre-trained models (OPT, Pythia)
verify our theoretical findings. Code can be found in
https://github.com/facebookresearch/luckmatters/tree/yuandong3.
|
[
{
"created": "Sun, 1 Oct 2023 01:21:35 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Oct 2023 04:23:26 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Mar 2024 02:03:21 GMT",
"version": "v3"
}
] |
2024-03-18
|
[
[
"Tian",
"Yuandong",
""
],
[
"Wang",
"Yiping",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Chen",
"Beidi",
""
],
[
"Du",
"Simon",
""
]
] |
We propose Joint MLP/Attention (JoMA) dynamics, a novel mathematical framework to understand the training procedure of multilayer Transformer architectures. This is achieved by integrating out the self-attention layer in Transformers, producing a modified dynamics of MLP layers only. JoMA removes unrealistic assumptions in previous analysis (e.g., lack of residual connection) and predicts that the attention first becomes sparse (to learn salient tokens), then dense (to learn less salient tokens) in the presence of nonlinear activations, while in the linear case, it is consistent with existing works that show attention becomes sparse over time. We leverage JoMA to qualitatively explains how tokens are combined to form hierarchies in multilayer Transformers, when the input tokens are generated by a latent hierarchical generative model. Experiments on models trained from real-world dataset (Wikitext2/Wikitext103) and various pre-trained models (OPT, Pythia) verify our theoretical findings. Code can be found in https://github.com/facebookresearch/luckmatters/tree/yuandong3.
|
2407.19402
|
Xihua Sheng
|
Xihua Sheng, Chuanbo Tang, Li Li, Dong Liu, Feng Wu
|
NVC-1B: A Large Neural Video Coding Model
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging large models have achieved notable progress in the fields of
natural language processing and computer vision. However, large models for
neural video coding are still unexplored. In this paper, we try to explore how
to build a large neural video coding model. Based on a small baseline model, we
gradually scale up the model sizes of its different coding parts, including the
motion encoder-decoder, motion entropy model, contextual encoder-decoder,
contextual entropy model, and temporal context mining module, and analyze the
influence of model sizes on video compression performance. Then, we explore to
use different architectures, including CNN, mixed CNN-Transformer, and
Transformer architectures, to implement the neural video coding model and
analyze the influence of model architectures on video compression performance.
Based on our exploration results, we design the first neural video coding model
with more than 1 billion parameters -- NVC-1B. Experimental results show that
our proposed large model achieves a significant video compression performance
improvement over the small baseline model, and represents the state-of-the-art
compression efficiency. We anticipate large models may bring up the video
coding technologies to the next level.
|
[
{
"created": "Sun, 28 Jul 2024 05:12:22 GMT",
"version": "v1"
}
] |
2024-07-30
|
[
[
"Sheng",
"Xihua",
""
],
[
"Tang",
"Chuanbo",
""
],
[
"Li",
"Li",
""
],
[
"Liu",
"Dong",
""
],
[
"Wu",
"Feng",
""
]
] |
The emerging large models have achieved notable progress in the fields of natural language processing and computer vision. However, large models for neural video coding are still unexplored. In this paper, we try to explore how to build a large neural video coding model. Based on a small baseline model, we gradually scale up the model sizes of its different coding parts, including the motion encoder-decoder, motion entropy model, contextual encoder-decoder, contextual entropy model, and temporal context mining module, and analyze the influence of model sizes on video compression performance. Then, we explore to use different architectures, including CNN, mixed CNN-Transformer, and Transformer architectures, to implement the neural video coding model and analyze the influence of model architectures on video compression performance. Based on our exploration results, we design the first neural video coding model with more than 1 billion parameters -- NVC-1B. Experimental results show that our proposed large model achieves a significant video compression performance improvement over the small baseline model, and represents the state-of-the-art compression efficiency. We anticipate large models may bring up the video coding technologies to the next level.
|
2208.02504
|
Usman Akhtar Dr
|
Usman Akhtar, Rafal Kucharski
|
Exploring Computational Complexity Of Ride-Pooling Problems
|
13 pages, 7 figures, Submitted to The Transportation Research Board
(TRB), Annual Meeting (102nd)
| null | null | null |
cs.DS cs.PF
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Ride-pooling is computationally challenging. The number of feasible rides
grows with the number of travelers and the degree (capacity of the vehicle to
perform a pooled ride) and quickly explodes to the sizes making the problem not
solvable analytically. In practice, heuristics are applied to limit the number
of searches, e.g., maximal detour and delay, or (like we use in this study)
attractive rides (for which detour and delay are at least compensated with the
discount).
Nevertheless, the challenge to solve the ride-pooling remains strongly
sensitive to the problem settings. Here, we explore it in more detail and
provide an experimental underpinning to this open research problem. We trace
how the size of the search space and computation time needed to solve the
ride-pooling problem grows with the increasing demand and greater discounts
offered for pooling. We run over 100 practical experiments in Amsterdam with
10-minute batches of trip requests up to 3600 trips per hour and trace how
challenging it is to propose the solution to the pooling problem with our ExMAS
algorithm.
We observed strong, non-linear trends and identified the limits beyond which
the problem exploded and our algorithm failed to compute. Notably, we found
that the demand level (number of trip requests) is less critical than the
discount. The search space grows exponentially and quickly reaches huge levels.
However, beyond some level, the greater size of the ride-pooling problem does
not translate into greater efficiency of pooling. Which opens the opportunity
for further search space reductions.
|
[
{
"created": "Thu, 4 Aug 2022 07:30:30 GMT",
"version": "v1"
}
] |
2022-08-05
|
[
[
"Akhtar",
"Usman",
""
],
[
"Kucharski",
"Rafal",
""
]
] |
Ride-pooling is computationally challenging. The number of feasible rides grows with the number of travelers and the degree (capacity of the vehicle to perform a pooled ride) and quickly explodes to the sizes making the problem not solvable analytically. In practice, heuristics are applied to limit the number of searches, e.g., maximal detour and delay, or (like we use in this study) attractive rides (for which detour and delay are at least compensated with the discount). Nevertheless, the challenge to solve the ride-pooling remains strongly sensitive to the problem settings. Here, we explore it in more detail and provide an experimental underpinning to this open research problem. We trace how the size of the search space and computation time needed to solve the ride-pooling problem grows with the increasing demand and greater discounts offered for pooling. We run over 100 practical experiments in Amsterdam with 10-minute batches of trip requests up to 3600 trips per hour and trace how challenging it is to propose the solution to the pooling problem with our ExMAS algorithm. We observed strong, non-linear trends and identified the limits beyond which the problem exploded and our algorithm failed to compute. Notably, we found that the demand level (number of trip requests) is less critical than the discount. The search space grows exponentially and quickly reaches huge levels. However, beyond some level, the greater size of the ride-pooling problem does not translate into greater efficiency of pooling. Which opens the opportunity for further search space reductions.
|
2008.12735
|
Donald Honeycutt
|
Donald R. Honeycutt, Mahsan Nourani, Eric D. Ragan
|
Soliciting Human-in-the-Loop User Feedback for Interactive Machine
Learning Reduces User Trust and Impressions of Model Accuracy
|
Accepted and to appear in the Proceedings of the AAAI Conference on
Human Computation and Crowdsourcing (HCOMP) 2020
| null | null | null |
cs.HC cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mixed-initiative systems allow users to interactively provide feedback to
potentially improve system performance. Human feedback can correct model errors
and update model parameters to dynamically adapt to changing data.
Additionally, many users desire the ability to have a greater level of control
and fix perceived flaws in systems they rely on. However, how the ability to
provide feedback to autonomous systems influences user trust is a largely
unexplored area of research. Our research investigates how the act of providing
feedback can affect user understanding of an intelligent system and its
accuracy. We present a controlled experiment using a simulated object detection
system with image data to study the effects of interactive feedback collection
on user impressions. The results show that providing human-in-the-loop feedback
lowered both participants' trust in the system and their perception of system
accuracy, regardless of whether the system accuracy improved in response to
their feedback. These results highlight the importance of considering the
effects of allowing end-user feedback on user trust when designing intelligent
systems.
|
[
{
"created": "Fri, 28 Aug 2020 16:46:41 GMT",
"version": "v1"
}
] |
2020-08-31
|
[
[
"Honeycutt",
"Donald R.",
""
],
[
"Nourani",
"Mahsan",
""
],
[
"Ragan",
"Eric D.",
""
]
] |
Mixed-initiative systems allow users to interactively provide feedback to potentially improve system performance. Human feedback can correct model errors and update model parameters to dynamically adapt to changing data. Additionally, many users desire the ability to have a greater level of control and fix perceived flaws in systems they rely on. However, how the ability to provide feedback to autonomous systems influences user trust is a largely unexplored area of research. Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy. We present a controlled experiment using a simulated object detection system with image data to study the effects of interactive feedback collection on user impressions. The results show that providing human-in-the-loop feedback lowered both participants' trust in the system and their perception of system accuracy, regardless of whether the system accuracy improved in response to their feedback. These results highlight the importance of considering the effects of allowing end-user feedback on user trust when designing intelligent systems.
|
1802.00304
|
Malika Bendechache
|
Malika Bendechache and M-Tahar Kechadi
|
Distributed Clustering Algorithm for Spatial Data Mining
|
6 pages. arXiv admin note: text overlap with arXiv:1704.03421
|
Spatial Data Mining and Geographical Knowledge Services (ICSDM),
2015 2nd IEEE International Conference on, pages 60--65, 2015
|
10.1109/ICSDM.2015.7298026
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed data mining techniques and mainly distributed clustering are
widely used in the last decade because they deal with very large and
heterogeneous datasets which cannot be gathered centrally. Current distributed
clustering approaches are normally generating global models by aggregating
local results that are obtained on each site. While this approach mines the
datasets on their locations the aggregation phase is complex, which may produce
incorrect and ambiguous global clusters and therefore incorrect knowledge. In
this paper we propose a new clustering approach for very large spatial datasets
that are heterogeneous and distributed. The approach is based on K-means
Algorithm but it generates the number of global clusters dynamically. Moreover,
this approach uses an elaborated aggregation phase. The aggregation phase is
designed in such a way that the overall process is efficient in time and memory
allocation. Preliminary results show that the proposed approach produces high
quality results and scales up well. We also compared it to two popular
clustering algorithms and show that this approach is much more efficient.
|
[
{
"created": "Thu, 1 Feb 2018 14:41:33 GMT",
"version": "v1"
}
] |
2018-02-02
|
[
[
"Bendechache",
"Malika",
""
],
[
"Kechadi",
"M-Tahar",
""
]
] |
Distributed data mining techniques and mainly distributed clustering are widely used in the last decade because they deal with very large and heterogeneous datasets which cannot be gathered centrally. Current distributed clustering approaches are normally generating global models by aggregating local results that are obtained on each site. While this approach mines the datasets on their locations the aggregation phase is complex, which may produce incorrect and ambiguous global clusters and therefore incorrect knowledge. In this paper we propose a new clustering approach for very large spatial datasets that are heterogeneous and distributed. The approach is based on K-means Algorithm but it generates the number of global clusters dynamically. Moreover, this approach uses an elaborated aggregation phase. The aggregation phase is designed in such a way that the overall process is efficient in time and memory allocation. Preliminary results show that the proposed approach produces high quality results and scales up well. We also compared it to two popular clustering algorithms and show that this approach is much more efficient.
|
2109.02682
|
Abdulaziz Alaboudi
|
Abdulaziz Alaboudi, Thomas D. LaToza
|
Edit-Run Behavior in Programming and Debugging
|
VL/HCC 2021
| null | null | null |
cs.SE cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
As developers program and debug, they continuously edit and run their code, a
behavior known as edit-run cycles. While techniques such as live programming
are intended to support this behavior, little is known about the
characteristics of edit-run cycles themselves. To bridge this gap, we analyzed
28 hours of programming and debugging work from 11 professional developers
which encompassed over three thousand development activities. We mapped
activities to edit or run steps, constructing 581 debugging and 207 programming
edit-run cycles. We found that edit-run cycles are frequent. Developers edit
and run the program, on average, 7 times before fixing a defect and twice
before introducing a defect. Developers waited longer before again running the
program when programming than debugging, with a mean cycle length of 3 minutes
for programming and 1 minute for debugging. Most cycles involved an edit to a
single file after which a developer ran the program to observe the impact on
the final output. Edit-run cycles which included activities beyond edit and
run, such as navigating between files, consulting resources, or interacting
with other IDE features, were much longer, with a mean length of 5 minutes,
rather than 1.5 minutes. We conclude with a discussion of design
recommendations for tools to enable more fluidity in edit-run cycles.
|
[
{
"created": "Mon, 6 Sep 2021 18:06:01 GMT",
"version": "v1"
}
] |
2021-09-08
|
[
[
"Alaboudi",
"Abdulaziz",
""
],
[
"LaToza",
"Thomas D.",
""
]
] |
As developers program and debug, they continuously edit and run their code, a behavior known as edit-run cycles. While techniques such as live programming are intended to support this behavior, little is known about the characteristics of edit-run cycles themselves. To bridge this gap, we analyzed 28 hours of programming and debugging work from 11 professional developers which encompassed over three thousand development activities. We mapped activities to edit or run steps, constructing 581 debugging and 207 programming edit-run cycles. We found that edit-run cycles are frequent. Developers edit and run the program, on average, 7 times before fixing a defect and twice before introducing a defect. Developers waited longer before again running the program when programming than debugging, with a mean cycle length of 3 minutes for programming and 1 minute for debugging. Most cycles involved an edit to a single file after which a developer ran the program to observe the impact on the final output. Edit-run cycles which included activities beyond edit and run, such as navigating between files, consulting resources, or interacting with other IDE features, were much longer, with a mean length of 5 minutes, rather than 1.5 minutes. We conclude with a discussion of design recommendations for tools to enable more fluidity in edit-run cycles.
|
1902.10030
|
Hussam Qassim Mr.
|
Hussein A. Al-Barazanchi, Hussam Qassim, David Feinzimer, and Abhishek
Verma
|
Residual-CNDS for Grand Challenge Scene Dataset
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Increasing depth of convolutional neural networks (CNNs) is a highly
promising method of increasing the accuracy of the (CNNs). Increased CNN depth
will also result in increased layer count (parameters), leading to a slow
backpropagation convergence prone to overfitting. We trained our model
(Residual-CNDS) to classify very large-scale scene datasets MIT Places 205, and
MIT Places 365-Standard. The outcome result from the two datasets proved our
proposed model (Residual-CNDS) effectively handled the slow convergence,
overfitting, and degradation. CNNs that include deep supervision (CNDS) add
supplementary branches to the deep convolutional neural network in specified
layers by calculating vanishing, effectively addressing delayed convergence and
overfitting. Nevertheless, (CNDS) does not resolve degradation; hence, we add
residual learning to the (CNDS) in certain layers after studying the best place
in which to add it. With this approach we overcome degradation in the very deep
network. We have built two models (Residual-CNDS 8), and (Residual-CNDS 10).
Moreover, we tested our models on two large-scale datasets, and we compared our
results with other recently introduced cutting-edge networks in the domain of
top-1 and top-5 classification accuracy. As a result, both of models have shown
good improvement, which supports the assertion that the addition of residual
connections enhances network CNDS accuracy without adding any computation
complexity.
|
[
{
"created": "Sun, 13 Jan 2019 23:00:11 GMT",
"version": "v1"
}
] |
2019-02-27
|
[
[
"Al-Barazanchi",
"Hussein A.",
""
],
[
"Qassim",
"Hussam",
""
],
[
"Feinzimer",
"David",
""
],
[
"Verma",
"Abhishek",
""
]
] |
Increasing depth of convolutional neural networks (CNNs) is a highly promising method of increasing the accuracy of the (CNNs). Increased CNN depth will also result in increased layer count (parameters), leading to a slow backpropagation convergence prone to overfitting. We trained our model (Residual-CNDS) to classify very large-scale scene datasets MIT Places 205, and MIT Places 365-Standard. The outcome result from the two datasets proved our proposed model (Residual-CNDS) effectively handled the slow convergence, overfitting, and degradation. CNNs that include deep supervision (CNDS) add supplementary branches to the deep convolutional neural network in specified layers by calculating vanishing, effectively addressing delayed convergence and overfitting. Nevertheless, (CNDS) does not resolve degradation; hence, we add residual learning to the (CNDS) in certain layers after studying the best place in which to add it. With this approach we overcome degradation in the very deep network. We have built two models (Residual-CNDS 8), and (Residual-CNDS 10). Moreover, we tested our models on two large-scale datasets, and we compared our results with other recently introduced cutting-edge networks in the domain of top-1 and top-5 classification accuracy. As a result, both of models have shown good improvement, which supports the assertion that the addition of residual connections enhances network CNDS accuracy without adding any computation complexity.
|
2302.08624
|
Kevin Scaria
|
Kevin Scaria and Himanshu Gupta and Siddharth Goyal and Saurabh Arjun
Sawant and Swaroop Mishra and Chitta Baral
|
InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis
|
4 pages, 3 figures, 9 tables, 9 appendix pages
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce InstructABSA, an instruction learning paradigm for Aspect-Based
Sentiment Analysis (ABSA) subtasks. Our method introduces positive, negative,
and neutral examples to each training sample, and instruction tune the model
(Tk-Instruct) for ABSA subtasks, yielding significant performance improvements.
Experimental results on the Sem Eval 2014, 15, and 16 datasets demonstrate that
InstructABSA outperforms the previous state-of-the-art (SOTA) approaches on
Term Extraction (ATE), Sentiment Classification(ATSC) and Sentiment Pair
Extraction (ASPE) subtasks. In particular, InstructABSA outperforms the
previous state-of-the-art (SOTA) on the Rest14 ATE subtask by 5.69% points, the
Rest15 ATSC subtask by 9.59% points, and the Lapt14 AOPE subtask by 3.37%
points, surpassing 7x larger models. We also get competitive results on AOOE,
AOPE, and AOSTE subtasks indicating strong generalization ability to all
subtasks. Exploring sample efficiency reveals that just 50% train data is
required to get competitive results with other instruction tuning approaches.
Lastly, we assess the quality of instructions and observe that InstructABSA's
performance experiences a decline of ~10% when adding misleading examples.
|
[
{
"created": "Thu, 16 Feb 2023 23:29:22 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Feb 2023 06:53:41 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Apr 2023 04:44:43 GMT",
"version": "v3"
},
{
"created": "Thu, 20 Apr 2023 05:57:12 GMT",
"version": "v4"
},
{
"created": "Thu, 25 May 2023 02:13:10 GMT",
"version": "v5"
},
{
"created": "Mon, 13 Nov 2023 17:56:19 GMT",
"version": "v6"
}
] |
2023-11-14
|
[
[
"Scaria",
"Kevin",
""
],
[
"Gupta",
"Himanshu",
""
],
[
"Goyal",
"Siddharth",
""
],
[
"Sawant",
"Saurabh Arjun",
""
],
[
"Mishra",
"Swaroop",
""
],
[
"Baral",
"Chitta",
""
]
] |
We introduce InstructABSA, an instruction learning paradigm for Aspect-Based Sentiment Analysis (ABSA) subtasks. Our method introduces positive, negative, and neutral examples to each training sample, and instruction tune the model (Tk-Instruct) for ABSA subtasks, yielding significant performance improvements. Experimental results on the Sem Eval 2014, 15, and 16 datasets demonstrate that InstructABSA outperforms the previous state-of-the-art (SOTA) approaches on Term Extraction (ATE), Sentiment Classification(ATSC) and Sentiment Pair Extraction (ASPE) subtasks. In particular, InstructABSA outperforms the previous state-of-the-art (SOTA) on the Rest14 ATE subtask by 5.69% points, the Rest15 ATSC subtask by 9.59% points, and the Lapt14 AOPE subtask by 3.37% points, surpassing 7x larger models. We also get competitive results on AOOE, AOPE, and AOSTE subtasks indicating strong generalization ability to all subtasks. Exploring sample efficiency reveals that just 50% train data is required to get competitive results with other instruction tuning approaches. Lastly, we assess the quality of instructions and observe that InstructABSA's performance experiences a decline of ~10% when adding misleading examples.
|
2105.11927
|
Chong Peng
|
Yang Liu, Qian Zhang, Yongyong Chen, Qiang Cheng and Chong Peng
|
Hyperspectral Image Denoising with Log-Based Robust PCA
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
It is a challenging task to remove heavy and mixed types of noise from
Hyperspectral images (HSIs). In this paper, we propose a novel nonconvex
approach to RPCA for HSI denoising, which adopts the log-determinant rank
approximation and a novel $\ell_{2,\log}$ norm, to restrict the low-rank or
column-wise sparse properties for the component matrices, respectively.For the
$\ell_{2,\log}$-regularized shrinkage problem, we develop an efficient,
closed-form solution, which is named $\ell_{2,\log}$-shrinkage operator, which
can be generally used in other problems. Extensive experiments on both
simulated and real HSIs demonstrate the effectiveness of the proposed method in
denoising HSIs.
|
[
{
"created": "Tue, 25 May 2021 13:32:01 GMT",
"version": "v1"
}
] |
2021-05-26
|
[
[
"Liu",
"Yang",
""
],
[
"Zhang",
"Qian",
""
],
[
"Chen",
"Yongyong",
""
],
[
"Cheng",
"Qiang",
""
],
[
"Peng",
"Chong",
""
]
] |
It is a challenging task to remove heavy and mixed types of noise from Hyperspectral images (HSIs). In this paper, we propose a novel nonconvex approach to RPCA for HSI denoising, which adopts the log-determinant rank approximation and a novel $\ell_{2,\log}$ norm, to restrict the low-rank or column-wise sparse properties for the component matrices, respectively.For the $\ell_{2,\log}$-regularized shrinkage problem, we develop an efficient, closed-form solution, which is named $\ell_{2,\log}$-shrinkage operator, which can be generally used in other problems. Extensive experiments on both simulated and real HSIs demonstrate the effectiveness of the proposed method in denoising HSIs.
|
2205.08383
|
Matheus Schmitz
|
Matheus Schmitz, Rehan Ahmed, Jimi Cao
|
Bias and Fairness on Multimodal Emotion Detection Algorithms
| null | null |
10.13140/RG.2.2.14341.01769
| null |
cs.LG cs.AI cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Numerous studies have shown that machine learning algorithms can latch onto
protected attributes such as race and gender and generate predictions that
systematically discriminate against one or more groups. To date the majority of
bias and fairness research has been on unimodal models. In this work, we
explore the biases that exist in emotion recognition systems in relationship to
the modalities utilized, and study how multimodal approaches affect system bias
and fairness. We consider audio, text, and video modalities, as well as all
possible multimodal combinations of those, and find that text alone has the
least bias, and accounts for the majority of the models' performances, raising
doubts about the worthiness of multimodal emotion recognition systems when bias
and fairness are desired alongside model performance.
|
[
{
"created": "Wed, 11 May 2022 20:03:25 GMT",
"version": "v1"
}
] |
2022-05-18
|
[
[
"Schmitz",
"Matheus",
""
],
[
"Ahmed",
"Rehan",
""
],
[
"Cao",
"Jimi",
""
]
] |
Numerous studies have shown that machine learning algorithms can latch onto protected attributes such as race and gender and generate predictions that systematically discriminate against one or more groups. To date the majority of bias and fairness research has been on unimodal models. In this work, we explore the biases that exist in emotion recognition systems in relationship to the modalities utilized, and study how multimodal approaches affect system bias and fairness. We consider audio, text, and video modalities, as well as all possible multimodal combinations of those, and find that text alone has the least bias, and accounts for the majority of the models' performances, raising doubts about the worthiness of multimodal emotion recognition systems when bias and fairness are desired alongside model performance.
|
2406.19106
|
Francesco Cambria
|
Francesco Cambria, Francesco Invernici, Anna Bernasconi and Stefano
Ceri
|
MINE GRAPH RULE: A New Cypher-like Operator for Mining Association Rules
on Property Graphs
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Mining information from graph databases is becoming overly important. To
approach this problem, current methods focus on identifying subgraphs with
specific topologies; as of today, no work has been focused on expressing
jointly the syntax and semantics of mining operations over rich property
graphs. We define MINE GRAPH RULE, a new operator for mining association rules
from graph databases, by extending classical approaches used in relational
databases and exploited by recommending systems. We describe the syntax and
semantics of the operator, which is based on measuring the support and
confidence of each rule, and then we provide several examples of increasing
complexity on top of a realistic example; our operator embeds Cypher for
expressing the mining conditions. MINE GRAPH RULE is implemented on top of
Neo4j, the most successful graph database system; it takes advantage of
built-in optimizations of the Neo4j engine, as well as optimizations that are
defined in the context of relational association rules. Our implementation is
available as a portable Neo4j plugin. At the end of our paper, we show the
execution performance in a variety of settings, by varying the operators, the
size of the graph, the ratio between node types, the method for creating
relationships, and maximum support and confidence.
|
[
{
"created": "Thu, 27 Jun 2024 11:33:16 GMT",
"version": "v1"
}
] |
2024-06-28
|
[
[
"Cambria",
"Francesco",
""
],
[
"Invernici",
"Francesco",
""
],
[
"Bernasconi",
"Anna",
""
],
[
"Ceri",
"Stefano",
""
]
] |
Mining information from graph databases is becoming overly important. To approach this problem, current methods focus on identifying subgraphs with specific topologies; as of today, no work has been focused on expressing jointly the syntax and semantics of mining operations over rich property graphs. We define MINE GRAPH RULE, a new operator for mining association rules from graph databases, by extending classical approaches used in relational databases and exploited by recommending systems. We describe the syntax and semantics of the operator, which is based on measuring the support and confidence of each rule, and then we provide several examples of increasing complexity on top of a realistic example; our operator embeds Cypher for expressing the mining conditions. MINE GRAPH RULE is implemented on top of Neo4j, the most successful graph database system; it takes advantage of built-in optimizations of the Neo4j engine, as well as optimizations that are defined in the context of relational association rules. Our implementation is available as a portable Neo4j plugin. At the end of our paper, we show the execution performance in a variety of settings, by varying the operators, the size of the graph, the ratio between node types, the method for creating relationships, and maximum support and confidence.
|
1505.02898
|
Bikramjit Singh
|
Bikramjit Singh, Konstantinos Koufos, Olav Tirkkonen
|
Coordination protocol for inter-operator spectrum sharing based on
spectrum usage favors
|
Published in proceedings of 23rd edition of European Conference on
Networks and Communications (EuCNC), Bologna, Jun. 2014
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, mobile network operators are allocated spectrum bands on an
exclusive basis. While this approach facilitates interference control, it may
also result in low spectrum utilization efficiency. Inter-operator spectrum
sharing is a potential method to enhance spectrum utilization. In order to
realize it, a protocol to coordinate the actions of operators is needed. We
propose a spectrum sharing protocol which is distributed in nature, it does not
require operator-specific information exchange and it incurs minimal
communication overhead between the operators. Operators are still free to
decide whether they share spectrum or not as the protocol is based on the book
keeping of spectrum usage favors, asked and received by the operators. We show
that operators can enhance their QoS in comparison with traditional orthogonal
spectrum allocation while also maintaining reciprocity i.e. no operator
benefits over the other in the long run. We demonstrate the usability of the
proposed protocol in an indoor deployment scenario with frequent network load
variations as expected to have in small cell deployments.
|
[
{
"created": "Tue, 12 May 2015 08:10:44 GMT",
"version": "v1"
}
] |
2015-05-13
|
[
[
"Singh",
"Bikramjit",
""
],
[
"Koufos",
"Konstantinos",
""
],
[
"Tirkkonen",
"Olav",
""
]
] |
Currently, mobile network operators are allocated spectrum bands on an exclusive basis. While this approach facilitates interference control, it may also result in low spectrum utilization efficiency. Inter-operator spectrum sharing is a potential method to enhance spectrum utilization. In order to realize it, a protocol to coordinate the actions of operators is needed. We propose a spectrum sharing protocol which is distributed in nature, it does not require operator-specific information exchange and it incurs minimal communication overhead between the operators. Operators are still free to decide whether they share spectrum or not as the protocol is based on the book keeping of spectrum usage favors, asked and received by the operators. We show that operators can enhance their QoS in comparison with traditional orthogonal spectrum allocation while also maintaining reciprocity i.e. no operator benefits over the other in the long run. We demonstrate the usability of the proposed protocol in an indoor deployment scenario with frequent network load variations as expected to have in small cell deployments.
|
1908.07820
|
XiaoKang Liu
|
Jianquan Li, Xiaokang Liu, Wenpeng Yin, Min Yang, Liqun Ma, Yaohong
Jin
|
Empirical Evaluation of Multi-task Learning in Deep Neural Networks for
Natural Language Processing
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-Task Learning (MTL) aims at boosting the overall performance of each
individual task by leveraging useful information contained in multiple related
tasks. It has shown great success in natural language processing (NLP).
Currently, a number of MLT architectures and learning mechanisms have been
proposed for various NLP tasks. However, there is no systematic exploration and
comparison of different MLT architectures and learning mechanisms for their
strong performance in-depth. In this paper, we conduct a thorough examination
of typical MTL methods on a broad range of representative NLP tasks. Our
primary goal is to understand the merits and demerits of existing MTL methods
in NLP tasks, thus devising new hybrid architectures intended to combine their
strengths.
|
[
{
"created": "Fri, 16 Aug 2019 03:16:40 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Aug 2020 08:06:18 GMT",
"version": "v2"
}
] |
2020-08-10
|
[
[
"Li",
"Jianquan",
""
],
[
"Liu",
"Xiaokang",
""
],
[
"Yin",
"Wenpeng",
""
],
[
"Yang",
"Min",
""
],
[
"Ma",
"Liqun",
""
],
[
"Jin",
"Yaohong",
""
]
] |
Multi-Task Learning (MTL) aims at boosting the overall performance of each individual task by leveraging useful information contained in multiple related tasks. It has shown great success in natural language processing (NLP). Currently, a number of MLT architectures and learning mechanisms have been proposed for various NLP tasks. However, there is no systematic exploration and comparison of different MLT architectures and learning mechanisms for their strong performance in-depth. In this paper, we conduct a thorough examination of typical MTL methods on a broad range of representative NLP tasks. Our primary goal is to understand the merits and demerits of existing MTL methods in NLP tasks, thus devising new hybrid architectures intended to combine their strengths.
|
1704.01893
|
Lukas Holzbaur
|
Lukas Holzbaur, Hannes Bartz, Antonia Wachter-Zeh
|
Improved Decoding and Error Floor Analysis of Staircase Codes
| null | null |
10.1007/s10623-018-0587-x
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Staircase codes play an important role as error-correcting codes in optical
communications. In this paper, a low-complexity method for resolving stall
patterns when decoding staircase codes is described. Stall patterns are the
dominating contributor to the error floor in the original decoding method. Our
improvement is based on locating stall patterns by intersecting non-zero
syndromes and flipping the corresponding bits. The approach effectively lowers
the error floor and allows for a new range of block sizes to be considered for
optical communications at a certain rate or, alternatively, a significantly
decreased error floor for the same block size. Further, an improved error floor
analysis is introduced which provides a more accurate estimation of the
contributions to the error floor.
|
[
{
"created": "Thu, 6 Apr 2017 15:39:52 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jan 2018 08:36:43 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Jul 2018 13:49:30 GMT",
"version": "v3"
},
{
"created": "Mon, 3 Dec 2018 09:26:43 GMT",
"version": "v4"
}
] |
2018-12-04
|
[
[
"Holzbaur",
"Lukas",
""
],
[
"Bartz",
"Hannes",
""
],
[
"Wachter-Zeh",
"Antonia",
""
]
] |
Staircase codes play an important role as error-correcting codes in optical communications. In this paper, a low-complexity method for resolving stall patterns when decoding staircase codes is described. Stall patterns are the dominating contributor to the error floor in the original decoding method. Our improvement is based on locating stall patterns by intersecting non-zero syndromes and flipping the corresponding bits. The approach effectively lowers the error floor and allows for a new range of block sizes to be considered for optical communications at a certain rate or, alternatively, a significantly decreased error floor for the same block size. Further, an improved error floor analysis is introduced which provides a more accurate estimation of the contributions to the error floor.
|
1506.03340
|
Karl Moritz Hermann
|
Karl Moritz Hermann, Tom\'a\v{s} Ko\v{c}isk\'y, Edward Grefenstette,
Lasse Espeholt, Will Kay, Mustafa Suleyman and Phil Blunsom
|
Teaching Machines to Read and Comprehend
|
Appears in: Advances in Neural Information Processing Systems 28
(NIPS 2015). 14 pages, 13 figures
| null | null | null |
cs.CL cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Teaching machines to read natural language documents remains an elusive
challenge. Machine reading systems can be tested on their ability to answer
questions posed on the contents of documents that they have seen, but until now
large scale training and test datasets have been missing for this type of
evaluation. In this work we define a new methodology that resolves this
bottleneck and provides large scale supervised reading comprehension data. This
allows us to develop a class of attention based deep neural networks that learn
to read real documents and answer complex questions with minimal prior
knowledge of language structure.
|
[
{
"created": "Wed, 10 Jun 2015 14:54:39 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Oct 2015 15:04:49 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Nov 2015 15:43:23 GMT",
"version": "v3"
}
] |
2015-11-20
|
[
[
"Hermann",
"Karl Moritz",
""
],
[
"Kočiský",
"Tomáš",
""
],
[
"Grefenstette",
"Edward",
""
],
[
"Espeholt",
"Lasse",
""
],
[
"Kay",
"Will",
""
],
[
"Suleyman",
"Mustafa",
""
],
[
"Blunsom",
"Phil",
""
]
] |
Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.
|
1911.12396
|
Marcelo Firer
|
Marcelo Firer
|
Alternative Metrics
|
This is a chapter for "Concise Encyclopedia of Coding Theory" to be
published by CRC Press
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main scope of this chapter is metrics defined for coding and decoding
purposes, mainly for block codes.
|
[
{
"created": "Wed, 27 Nov 2019 19:46:55 GMT",
"version": "v1"
}
] |
2019-12-02
|
[
[
"Firer",
"Marcelo",
""
]
] |
The main scope of this chapter is metrics defined for coding and decoding purposes, mainly for block codes.
|
1701.05924
|
Maria Cabrera
|
Maria Cabrera, Richard Voyles, Juan Wachs
|
Coherency in One-Shot Gesture Recognition
|
This paper was submitted to a IEEE conference
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
User's intentions may be expressed through spontaneous gesturing, which have
been seen only a few times or never before. Recognizing such gestures involves
one shot gesture learning. While most research has focused on the recognition
of the gestures itself, recently new approaches were proposed to deal with
gesture perception and production as part of the same problem. The framework
presented in this work focuses on learning the process that leads to gesture
generation, rather than mining the gesture's associated features. This is
achieved using kinematic, cognitive and biomechanic characteristics of human
interaction. These factors enable the artificial production of realistic
gesture samples originated from a single observation. The generated samples are
then used as training sets for different state-of-the-art classifiers.
Performance is obtained first, by observing the machines' gesture recognition
percentages. Then, performance is computed by the human recognition from
gestures performed by robots. Based on these two scenarios, a composite new
metric of coherency is proposed relating to the amount of agreement between
these two conditions. Experimental results provide an average recognition
performance of 89.2% for the trained classifiers and 92.5% for the
participants. Coherency in recognition was determined at 93.6%. While this new
metric is not directly comparable to raw accuracy or other pure
performance-based standard metrics, it provides a quantifier for validating how
realistic the machine generated samples are and how accurate the resulting
mimicry is.
|
[
{
"created": "Fri, 20 Jan 2017 20:54:10 GMT",
"version": "v1"
}
] |
2017-01-24
|
[
[
"Cabrera",
"Maria",
""
],
[
"Voyles",
"Richard",
""
],
[
"Wachs",
"Juan",
""
]
] |
User's intentions may be expressed through spontaneous gesturing, which have been seen only a few times or never before. Recognizing such gestures involves one shot gesture learning. While most research has focused on the recognition of the gestures itself, recently new approaches were proposed to deal with gesture perception and production as part of the same problem. The framework presented in this work focuses on learning the process that leads to gesture generation, rather than mining the gesture's associated features. This is achieved using kinematic, cognitive and biomechanic characteristics of human interaction. These factors enable the artificial production of realistic gesture samples originated from a single observation. The generated samples are then used as training sets for different state-of-the-art classifiers. Performance is obtained first, by observing the machines' gesture recognition percentages. Then, performance is computed by the human recognition from gestures performed by robots. Based on these two scenarios, a composite new metric of coherency is proposed relating to the amount of agreement between these two conditions. Experimental results provide an average recognition performance of 89.2% for the trained classifiers and 92.5% for the participants. Coherency in recognition was determined at 93.6%. While this new metric is not directly comparable to raw accuracy or other pure performance-based standard metrics, it provides a quantifier for validating how realistic the machine generated samples are and how accurate the resulting mimicry is.
|
2206.04928
|
Mohit Vaishnav
|
Mohit Vaishnav, Thomas Serre
|
GAMR: A Guided Attention Model for (visual) Reasoning
| null |
Eleventh International Conference on Learning Representations
(ICLR) 2023
| null | null |
cs.AI cs.LG cs.NE cs.SC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Humans continue to outperform modern AI systems in their ability to flexibly
parse and understand complex visual scenes. Here, we present a novel module for
visual reasoning, the Guided Attention Model for (visual) Reasoning (GAMR),
which instantiates an active vision theory -- positing that the brain solves
complex visual reasoning problems dynamically -- via sequences of attention
shifts to select and route task-relevant visual information into memory.
Experiments on an array of visual reasoning tasks and datasets demonstrate
GAMR's ability to learn visual routines in a robust and sample-efficient
manner. In addition, GAMR is shown to be capable of zero-shot generalization on
completely novel reasoning tasks. Overall, our work provides computational
support for cognitive theories that postulate the need for a critical interplay
between attention and memory to dynamically maintain and manipulate
task-relevant visual information to solve complex visual reasoning tasks.
|
[
{
"created": "Fri, 10 Jun 2022 07:52:06 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jun 2022 17:52:57 GMT",
"version": "v2"
},
{
"created": "Wed, 21 Sep 2022 10:17:20 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Sep 2022 11:57:12 GMT",
"version": "v4"
},
{
"created": "Tue, 21 Mar 2023 15:35:50 GMT",
"version": "v5"
}
] |
2023-03-22
|
[
[
"Vaishnav",
"Mohit",
""
],
[
"Serre",
"Thomas",
""
]
] |
Humans continue to outperform modern AI systems in their ability to flexibly parse and understand complex visual scenes. Here, we present a novel module for visual reasoning, the Guided Attention Model for (visual) Reasoning (GAMR), which instantiates an active vision theory -- positing that the brain solves complex visual reasoning problems dynamically -- via sequences of attention shifts to select and route task-relevant visual information into memory. Experiments on an array of visual reasoning tasks and datasets demonstrate GAMR's ability to learn visual routines in a robust and sample-efficient manner. In addition, GAMR is shown to be capable of zero-shot generalization on completely novel reasoning tasks. Overall, our work provides computational support for cognitive theories that postulate the need for a critical interplay between attention and memory to dynamically maintain and manipulate task-relevant visual information to solve complex visual reasoning tasks.
|
2303.05862
|
Margherita Bert\`e
|
Margherita Bert\`e, Kyriaki Kalimeri, Daniela Paolotti
|
Monitoring Gender Gaps via LinkedIn Advertising Estimates: the case
study of Italy
|
10 pages
|
In Proceedings of the ACM Web Science Conference 2023 (WebSci
'23), April 30-May 1, 2023, Evanston, TX, USA. ACM, New York, NY, USA, 10
pages
|
10.1145/3578503.3583629
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Women remain underrepresented in the labour market. Although significant
advancements are being made to increase female participation in the workforce,
the gender gap is still far from being bridged. We contribute to the growing
literature on gender inequalities in the labour market, evaluating the
potential of the LinkedIn estimates to monitor the evolution of the gender gaps
sustainably, complementing the official data sources. In particular, assessing
the labour market patterns at a subnational level in Italy. Our findings show
that the LinkedIn estimates accurately capture the gender disparities in Italy
regarding sociodemographic attributes such as gender, age, geographic location,
seniority, and industry category. At the same time, we assess data biases such
as the digitalisation gap, which impacts the representativity of the workforce
in an imbalanced manner, confirming that women are under-represented in
Southern Italy. Additionally to confirming the gender disparities to the
official census, LinkedIn estimates are a valuable tool to provide dynamic
insights; we showed an immigration flow of highly skilled women, predominantly
from the South. Digital surveillance of gender inequalities with detailed and
timely data is particularly significant to enable policymakers to tailor
impactful campaigns.
|
[
{
"created": "Fri, 10 Mar 2023 11:32:45 GMT",
"version": "v1"
}
] |
2023-03-13
|
[
[
"Bertè",
"Margherita",
""
],
[
"Kalimeri",
"Kyriaki",
""
],
[
"Paolotti",
"Daniela",
""
]
] |
Women remain underrepresented in the labour market. Although significant advancements are being made to increase female participation in the workforce, the gender gap is still far from being bridged. We contribute to the growing literature on gender inequalities in the labour market, evaluating the potential of the LinkedIn estimates to monitor the evolution of the gender gaps sustainably, complementing the official data sources. In particular, assessing the labour market patterns at a subnational level in Italy. Our findings show that the LinkedIn estimates accurately capture the gender disparities in Italy regarding sociodemographic attributes such as gender, age, geographic location, seniority, and industry category. At the same time, we assess data biases such as the digitalisation gap, which impacts the representativity of the workforce in an imbalanced manner, confirming that women are under-represented in Southern Italy. Additionally to confirming the gender disparities to the official census, LinkedIn estimates are a valuable tool to provide dynamic insights; we showed an immigration flow of highly skilled women, predominantly from the South. Digital surveillance of gender inequalities with detailed and timely data is particularly significant to enable policymakers to tailor impactful campaigns.
|
2104.00236
|
Jack Li
|
Jianhua Li, Ximeng Liu, Jiong Jin, Shui Yu
|
Too Expensive to Attack: A Joint Defense Framework to Mitigate
Distributed Attacks for the Internet of Things Grid
|
10 pages, 10 figures, 5 tables
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The distributed denial of service (DDoS) attack is detrimental to businesses
and individuals as we are heavily relying on the Internet. Due to remarkable
profits, crackers favor DDoS as cybersecurity weapons in attacking servers,
computers, IoT devices, and even the entire Internet. Many current detection
and mitigation solutions concentrate on specific technologies in combating
DDoS, whereas the attacking expense and the cross-defender collaboration have
not drawn enough attention. Under this circumstance, we revisit the DDoS attack
and defense in terms of attacking cost and populations of both parties,
proposing a joint defense framework to incur higher attacking expense in a grid
of Internet service providers (ISPs), businesses, individuals, and third-party
organizations (IoT Grid). Meanwhile, the defender's cost does not grow much
during combats. The skyrocket of attacking expense discourages profit-driven
attackers from launching further attacks effectively. The quantitative
evaluation and experimental assessment reinforce the effectiveness of our
framework.
|
[
{
"created": "Thu, 1 Apr 2021 03:40:29 GMT",
"version": "v1"
}
] |
2021-04-02
|
[
[
"Li",
"Jianhua",
""
],
[
"Liu",
"Ximeng",
""
],
[
"Jin",
"Jiong",
""
],
[
"Yu",
"Shui",
""
]
] |
The distributed denial of service (DDoS) attack is detrimental to businesses and individuals as we are heavily relying on the Internet. Due to remarkable profits, crackers favor DDoS as cybersecurity weapons in attacking servers, computers, IoT devices, and even the entire Internet. Many current detection and mitigation solutions concentrate on specific technologies in combating DDoS, whereas the attacking expense and the cross-defender collaboration have not drawn enough attention. Under this circumstance, we revisit the DDoS attack and defense in terms of attacking cost and populations of both parties, proposing a joint defense framework to incur higher attacking expense in a grid of Internet service providers (ISPs), businesses, individuals, and third-party organizations (IoT Grid). Meanwhile, the defender's cost does not grow much during combats. The skyrocket of attacking expense discourages profit-driven attackers from launching further attacks effectively. The quantitative evaluation and experimental assessment reinforce the effectiveness of our framework.
|
1701.00146
|
Erik Demaine
|
Jeffrey Bosboom, Erik D. Demaine, Martin L. Demaine, Adam Hesterberg,
Pasin Manurangsi, Anak Yodpinyanee
|
Even $1 \times n$ Edge-Matching and Jigsaw Puzzles are Really Hard
|
22 pages, 9 figures
| null | null | null |
cs.CC cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove the computational intractability of rotating and placing $n$ square
tiles into a $1 \times n$ array such that adjacent tiles are compatible--either
equal edge colors, as in edge-matching puzzles, or matching tab/pocket shapes,
as in jigsaw puzzles. Beyond basic NP-hardness, we prove that it is NP-hard
even to approximately maximize the number of placed tiles (allowing blanks),
while satisfying the compatibility constraint between nonblank tiles, within a
factor of 0.9999999851. (On the other hand, there is an easy $1 \over
2$-approximation.) This is the first (correct) proof of inapproximability for
edge-matching and jigsaw puzzles. Along the way, we prove NP-hardness of
distinguishing, for a directed graph on $n$ nodes, between having a Hamiltonian
path (length $n-1$) and having at most $0.999999284 (n-1)$ edges that form a
vertex-disjoint union of paths. We use this gap hardness and gap-preserving
reductions to establish similar gap hardness for $1 \times n$ jigsaw and
edge-matching puzzles.
|
[
{
"created": "Sat, 31 Dec 2016 17:05:53 GMT",
"version": "v1"
}
] |
2017-01-03
|
[
[
"Bosboom",
"Jeffrey",
""
],
[
"Demaine",
"Erik D.",
""
],
[
"Demaine",
"Martin L.",
""
],
[
"Hesterberg",
"Adam",
""
],
[
"Manurangsi",
"Pasin",
""
],
[
"Yodpinyanee",
"Anak",
""
]
] |
We prove the computational intractability of rotating and placing $n$ square tiles into a $1 \times n$ array such that adjacent tiles are compatible--either equal edge colors, as in edge-matching puzzles, or matching tab/pocket shapes, as in jigsaw puzzles. Beyond basic NP-hardness, we prove that it is NP-hard even to approximately maximize the number of placed tiles (allowing blanks), while satisfying the compatibility constraint between nonblank tiles, within a factor of 0.9999999851. (On the other hand, there is an easy $1 \over 2$-approximation.) This is the first (correct) proof of inapproximability for edge-matching and jigsaw puzzles. Along the way, we prove NP-hardness of distinguishing, for a directed graph on $n$ nodes, between having a Hamiltonian path (length $n-1$) and having at most $0.999999284 (n-1)$ edges that form a vertex-disjoint union of paths. We use this gap hardness and gap-preserving reductions to establish similar gap hardness for $1 \times n$ jigsaw and edge-matching puzzles.
|
1002.1691
|
Rdv Ijcsis
|
Sumon Kumar Debnath, Foez Ahmed, Nayeema Islam
|
Performance Evaluation of Unicast and Broadcast Mobile Ad hoc Network
Routing Protocols
|
7 Pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS January 2010, ISSN 1947 5500,
http://sites.google.com/site/ijcsis/
|
International Journal of Computer Science and Information
Security, IJCSIS, Vol. 7, No. 1, pp. 40-46, January 2010, USA
| null |
Computer Science Volume 7 ISSN 19475500
|
cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient routing mechanism is a challenging issue for group oriented
computing in Mobile Ad Hoc Networks (MANETs). The ability of MANETs to support
adequate Quality of Service (QoS) for group communication is limited by the
ability of the underlying ad-hoc routing protocols to provide consistent
behavior despite the dynamic properties of mobile computing devices. In MANET
QoS requirements can be quantified in terms of Packet Delivery Ratio (PDR),
Data Latency, Packet Loss Probability, Routing Overhead, Medium Access Control
(MAC) Overhead and Data Throughput etc. This paper presents an in depth study
of one to many and many to many communications in MANETs and provides a
comparative performance evaluation of unicast and broadcast routing protocols.
Dynamic Source Routing protocol (DSR) is used as unicast protocol and BCAST is
used to represent broadcast protocol. The performance differentials are
analyzed using ns2 network simulator varying multicast group size (number of
data senders and data receivers). Both protocols are simulated with identical
traffic loads and mobility models. Simulation result shows that BCAST performs
better than DSR in most cases.
|
[
{
"created": "Mon, 8 Feb 2010 19:12:53 GMT",
"version": "v1"
}
] |
2010-02-09
|
[
[
"Debnath",
"Sumon Kumar",
""
],
[
"Ahmed",
"Foez",
""
],
[
"Islam",
"Nayeema",
""
]
] |
Efficient routing mechanism is a challenging issue for group oriented computing in Mobile Ad Hoc Networks (MANETs). The ability of MANETs to support adequate Quality of Service (QoS) for group communication is limited by the ability of the underlying ad-hoc routing protocols to provide consistent behavior despite the dynamic properties of mobile computing devices. In MANET QoS requirements can be quantified in terms of Packet Delivery Ratio (PDR), Data Latency, Packet Loss Probability, Routing Overhead, Medium Access Control (MAC) Overhead and Data Throughput etc. This paper presents an in depth study of one to many and many to many communications in MANETs and provides a comparative performance evaluation of unicast and broadcast routing protocols. Dynamic Source Routing protocol (DSR) is used as unicast protocol and BCAST is used to represent broadcast protocol. The performance differentials are analyzed using ns2 network simulator varying multicast group size (number of data senders and data receivers). Both protocols are simulated with identical traffic loads and mobility models. Simulation result shows that BCAST performs better than DSR in most cases.
|
1903.01298
|
Elvin Isufi
|
Elvin Isufi, Fernando Gama, Alejandro Ribeiro
|
Generalizing Graph Convolutional Neural Networks with Edge-Variant
Recursions on Graphs
|
submitted to EUSIPCO 2019
| null | null | null |
cs.LG eess.SP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reviews graph convolutional neural networks (GCNNs) through the
lens of edge-variant graph filters. The edge-variant graph filter is a finite
order, linear, and local recursion that allows each node, in each iteration, to
weigh differently the information of its neighbors. By exploiting this
recursion, we formulate a general framework for GCNNs which considers
state-of-the-art solutions as particular cases. This framework results useful
to i) understand the tradeoff between local detail and the number of parameters
of each solution and ii) provide guidelines for developing a myriad of novel
approaches that can be implemented locally in the vertex domain. One of such
approaches is presented here showing superior performance w.r.t. current
alternatives in graph signal classification problems.
|
[
{
"created": "Mon, 4 Mar 2019 15:05:36 GMT",
"version": "v1"
}
] |
2019-03-05
|
[
[
"Isufi",
"Elvin",
""
],
[
"Gama",
"Fernando",
""
],
[
"Ribeiro",
"Alejandro",
""
]
] |
This paper reviews graph convolutional neural networks (GCNNs) through the lens of edge-variant graph filters. The edge-variant graph filter is a finite order, linear, and local recursion that allows each node, in each iteration, to weigh differently the information of its neighbors. By exploiting this recursion, we formulate a general framework for GCNNs which considers state-of-the-art solutions as particular cases. This framework results useful to i) understand the tradeoff between local detail and the number of parameters of each solution and ii) provide guidelines for developing a myriad of novel approaches that can be implemented locally in the vertex domain. One of such approaches is presented here showing superior performance w.r.t. current alternatives in graph signal classification problems.
|
1809.00952
|
Irvin Aloise
|
Irvin Aloise and Giorgio Grisetti
|
Matrix Difference in Pose-Graph Optimization
|
10 pages, 7 figures, source:
https://srrg.gitlab.io/g2o_chordal_plugin.html
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pose-Graph optimization is a crucial component of many modern SLAM systems.
Most prominent state of the art systems address this problem by iterative
non-linear least squares. Both number of iterations and convergence basin of
these approaches depend on the error functions used to describe the problem.
The smoother and more convex the error function with respect to perturbations
of the state variables, the better the least-squares solver will perform. In
this paper we propose an alternative error function obtained by removing some
non-linearities from the standard used one - i.e. the geodesic error function.
Comparative experiments conducted on common benchmarking datasets confirm that
our function is more robust to noise that affects the rotational component of
the pose measurements and, thus, exhibits a larger convergence basin than the
geodesic. Furthermore, its implementation is relatively easy compared to the
geodesic distance. This property leads to rather simple derivatives and nice
numerical properties of the Jacobians resulting from the effective computation
of the quadratic approximation used by Gauss-Newton algorithm.
|
[
{
"created": "Tue, 4 Sep 2018 13:41:01 GMT",
"version": "v1"
}
] |
2018-09-05
|
[
[
"Aloise",
"Irvin",
""
],
[
"Grisetti",
"Giorgio",
""
]
] |
Pose-Graph optimization is a crucial component of many modern SLAM systems. Most prominent state of the art systems address this problem by iterative non-linear least squares. Both number of iterations and convergence basin of these approaches depend on the error functions used to describe the problem. The smoother and more convex the error function with respect to perturbations of the state variables, the better the least-squares solver will perform. In this paper we propose an alternative error function obtained by removing some non-linearities from the standard used one - i.e. the geodesic error function. Comparative experiments conducted on common benchmarking datasets confirm that our function is more robust to noise that affects the rotational component of the pose measurements and, thus, exhibits a larger convergence basin than the geodesic. Furthermore, its implementation is relatively easy compared to the geodesic distance. This property leads to rather simple derivatives and nice numerical properties of the Jacobians resulting from the effective computation of the quadratic approximation used by Gauss-Newton algorithm.
|
2302.05959
|
Yanheng Li
|
Yanheng Li, Lin Luoying, Xinyan Li, Yaxuan Mao, Ray Lc
|
"Nice to meet you!": Expressing Emotions with Movement Gestures and
Textual Content in Automatic Handwriting Robots
|
HRI 2023 LBR
| null |
10.1145/3568294.3580045
| null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-writing robots have been used in assistive writing and drawing
applications. However, robots do not convey emotional tones in the writing
process due to the lack of behaviors humans typically adopt. To examine how
people interpret designed robotic expressions of emotion through both movements
and textual output, we used a pen-plotting robot to generate texts by
performing human-like behaviors like stop-and-go, speed, and pressure
variation. We examined how people convey emotion in the writing process by
observing how they wrote in different emotional contexts. We then mapped these
human expressions during writing to the handwriting robot and measured how well
other participants understood the robot's affective expression. We found that
textual output was the strongest determinant of participants' ability to
perceive the robot's emotions, whereas parameters of gestural movements of the
robots like speed, fluency, pressure, size, and acceleration could be useful
for understanding the context of the writing expression.
|
[
{
"created": "Sun, 12 Feb 2023 17:13:25 GMT",
"version": "v1"
}
] |
2023-02-14
|
[
[
"Li",
"Yanheng",
""
],
[
"Luoying",
"Lin",
""
],
[
"Li",
"Xinyan",
""
],
[
"Mao",
"Yaxuan",
""
],
[
"Lc",
"Ray",
""
]
] |
Text-writing robots have been used in assistive writing and drawing applications. However, robots do not convey emotional tones in the writing process due to the lack of behaviors humans typically adopt. To examine how people interpret designed robotic expressions of emotion through both movements and textual output, we used a pen-plotting robot to generate texts by performing human-like behaviors like stop-and-go, speed, and pressure variation. We examined how people convey emotion in the writing process by observing how they wrote in different emotional contexts. We then mapped these human expressions during writing to the handwriting robot and measured how well other participants understood the robot's affective expression. We found that textual output was the strongest determinant of participants' ability to perceive the robot's emotions, whereas parameters of gestural movements of the robots like speed, fluency, pressure, size, and acceleration could be useful for understanding the context of the writing expression.
|
2107.07116
|
Feng Shi
|
Feng Shi, Chonghan Lee, Mohammad Khairul Bashar, Nikhil Shukla,
Song-Chun Zhu and Vijaykrishnan Narayanan
|
Transformer-based Machine Learning for Fast SAT Solvers and Logic
Synthesis
| null | null | null | null |
cs.NE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CNF-based SAT and MaxSAT solvers are central to logic synthesis and
verification systems. The increasing popularity of these constraint problems in
electronic design automation encourages studies on different SAT problems and
their properties for further computational efficiency. There has been both
theoretical and practical success of modern Conflict-driven clause learning SAT
solvers, which allows solving very large industrial instances in a relatively
short amount of time. Recently, machine learning approaches provide a new
dimension to solving this challenging problem. Neural symbolic models could
serve as generic solvers that can be specialized for specific domains based on
data without any changes to the structure of the model. In this work, we
propose a one-shot model derived from the Transformer architecture to solve the
MaxSAT problem, which is the optimization version of SAT where the goal is to
satisfy the maximum number of clauses. Our model has a scale-free structure
which could process varying size of instances. We use meta-path and
self-attention mechanism to capture interactions among homogeneous nodes. We
adopt cross-attention mechanisms on the bipartite graph to capture interactions
among heterogeneous nodes. We further apply an iterative algorithm to our model
to satisfy additional clauses, enabling a solution approaching that of an
exact-SAT problem. The attention mechanisms leverage the parallelism for
speedup. Our evaluation indicates improved speedup compared to heuristic
approaches and improved completion rate compared to machine learning
approaches.
|
[
{
"created": "Thu, 15 Jul 2021 04:47:35 GMT",
"version": "v1"
}
] |
2021-07-16
|
[
[
"Shi",
"Feng",
""
],
[
"Lee",
"Chonghan",
""
],
[
"Bashar",
"Mohammad Khairul",
""
],
[
"Shukla",
"Nikhil",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Narayanan",
"Vijaykrishnan",
""
]
] |
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems. The increasing popularity of these constraint problems in electronic design automation encourages studies on different SAT problems and their properties for further computational efficiency. There has been both theoretical and practical success of modern Conflict-driven clause learning SAT solvers, which allows solving very large industrial instances in a relatively short amount of time. Recently, machine learning approaches provide a new dimension to solving this challenging problem. Neural symbolic models could serve as generic solvers that can be specialized for specific domains based on data without any changes to the structure of the model. In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem, which is the optimization version of SAT where the goal is to satisfy the maximum number of clauses. Our model has a scale-free structure which could process varying size of instances. We use meta-path and self-attention mechanism to capture interactions among homogeneous nodes. We adopt cross-attention mechanisms on the bipartite graph to capture interactions among heterogeneous nodes. We further apply an iterative algorithm to our model to satisfy additional clauses, enabling a solution approaching that of an exact-SAT problem. The attention mechanisms leverage the parallelism for speedup. Our evaluation indicates improved speedup compared to heuristic approaches and improved completion rate compared to machine learning approaches.
|
2106.13679
|
Giovanni Trappolini
|
Giovanni Trappolini, Luca Cosmo, Luca Moschella, Riccardo Marin,
Simone Melzi, Emanuele Rodol\`a
|
Shape registration in the time of transformers
| null | null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a transformer-based procedure for the efficient
registration of non-rigid 3D point clouds. The proposed approach is data-driven
and adopts for the first time the transformer architecture in the registration
task. Our method is general and applies to different settings. Given a fixed
template with some desired properties (e.g. skinning weights or other animation
cues), we can register raw acquired data to it, thereby transferring all the
template properties to the input geometry. Alternatively, given a pair of
shapes, our method can register the first onto the second (or vice-versa),
obtaining a high-quality dense correspondence between the two. In both
contexts, the quality of our results enables us to target real applications
such as texture transfer and shape interpolation. Furthermore, we also show
that including an estimation of the underlying density of the surface eases the
learning process. By exploiting the potential of this architecture, we can
train our model requiring only a sparse set of ground truth correspondences
($10\sim20\%$ of the total points). The proposed model and the analysis that we
perform pave the way for future exploration of transformer-based architectures
for registration and matching applications. Qualitative and quantitative
evaluations demonstrate that our pipeline outperforms state-of-the-art methods
for deformable and unordered 3D data registration on different datasets and
scenarios.
|
[
{
"created": "Fri, 25 Jun 2021 15:02:30 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 07:56:20 GMT",
"version": "v2"
}
] |
2021-06-29
|
[
[
"Trappolini",
"Giovanni",
""
],
[
"Cosmo",
"Luca",
""
],
[
"Moschella",
"Luca",
""
],
[
"Marin",
"Riccardo",
""
],
[
"Melzi",
"Simone",
""
],
[
"Rodolà",
"Emanuele",
""
]
] |
In this paper, we propose a transformer-based procedure for the efficient registration of non-rigid 3D point clouds. The proposed approach is data-driven and adopts for the first time the transformer architecture in the registration task. Our method is general and applies to different settings. Given a fixed template with some desired properties (e.g. skinning weights or other animation cues), we can register raw acquired data to it, thereby transferring all the template properties to the input geometry. Alternatively, given a pair of shapes, our method can register the first onto the second (or vice-versa), obtaining a high-quality dense correspondence between the two. In both contexts, the quality of our results enables us to target real applications such as texture transfer and shape interpolation. Furthermore, we also show that including an estimation of the underlying density of the surface eases the learning process. By exploiting the potential of this architecture, we can train our model requiring only a sparse set of ground truth correspondences ($10\sim20\%$ of the total points). The proposed model and the analysis that we perform pave the way for future exploration of transformer-based architectures for registration and matching applications. Qualitative and quantitative evaluations demonstrate that our pipeline outperforms state-of-the-art methods for deformable and unordered 3D data registration on different datasets and scenarios.
|
1801.07743
|
Pedro Saleiro
|
Pedro Saleiro
|
Entity Retrieval and Text Mining for Online Reputation Monitoring
|
PhD Thesis
| null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online Reputation Monitoring (ORM) is concerned with the use of computational
tools to measure the reputation of entities online, such as politicians or
companies. In practice, current ORM methods are constrained to the generation
of data analytics reports, which aggregate statistics of popularity and
sentiment on social media. We argue that this format is too restrictive as end
users often like to have the flexibility to search for entity-centric
information that is not available in predefined charts. As such, we propose the
inclusion of entity retrieval capabilities as a first step towards the
extension of current ORM capabilities. However, an entity's reputation is also
influenced by the entity's relationships with other entities. Therefore, we
address the problem of Entity-Relationship (E-R) retrieval in which the goal is
to search for multiple connected entities. This is a challenging problem which
traditional entity search systems cannot cope with. Besides E-R retrieval we
also believe ORM would benefit of text-based entity-centric prediction
capabilities, such as predicting entity popularity on social media based on
news events or the outcome of political surveys. However, none of these tasks
can provide useful results if there is no effective entity disambiguation and
sentiment analysis tailored to the context of ORM. Consequently, this thesis
address two computational problems in Online Reputation Monitoring: Entity
Retrieval and Text Mining. We researched and developed methods to extract,
retrieve and predict entity-centric information spread across the Web.
|
[
{
"created": "Tue, 23 Jan 2018 19:36:29 GMT",
"version": "v1"
}
] |
2018-01-25
|
[
[
"Saleiro",
"Pedro",
""
]
] |
Online Reputation Monitoring (ORM) is concerned with the use of computational tools to measure the reputation of entities online, such as politicians or companies. In practice, current ORM methods are constrained to the generation of data analytics reports, which aggregate statistics of popularity and sentiment on social media. We argue that this format is too restrictive as end users often like to have the flexibility to search for entity-centric information that is not available in predefined charts. As such, we propose the inclusion of entity retrieval capabilities as a first step towards the extension of current ORM capabilities. However, an entity's reputation is also influenced by the entity's relationships with other entities. Therefore, we address the problem of Entity-Relationship (E-R) retrieval in which the goal is to search for multiple connected entities. This is a challenging problem which traditional entity search systems cannot cope with. Besides E-R retrieval we also believe ORM would benefit of text-based entity-centric prediction capabilities, such as predicting entity popularity on social media based on news events or the outcome of political surveys. However, none of these tasks can provide useful results if there is no effective entity disambiguation and sentiment analysis tailored to the context of ORM. Consequently, this thesis address two computational problems in Online Reputation Monitoring: Entity Retrieval and Text Mining. We researched and developed methods to extract, retrieve and predict entity-centric information spread across the Web.
|
1811.08203
|
Noveen Sachdeva
|
Noveen Sachdeva, Kartik Gupta, Vikram Pudi
|
Attentive Neural Architecture Incorporating Song Features For Music
Recommendation
|
Accepted as a paper at the 12th ACM Conference on Recommender Systems
(RecSys 18)
|
12th ACM Conference on Recommender Systems (RecSys '18). ACM
(2018) 417-421
|
10.1145/3240323.3240397
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender Systems are an integral part of music sharing platforms. Often
the aim of these systems is to increase the time, the user spends on the
platform and hence having a high commercial value. The systems which aim at
increasing the average time a user spends on the platform often need to
recommend songs which the user might want to listen to next at each point in
time. This is different from recommendation systems which try to predict the
item which might be of interest to the user at some point in the user lifetime
but not necessarily in the very near future. Prediction of the next song the
user might like requires some kind of modeling of the user interests at the
given point of time. Attentive neural networks have been exploiting the
sequence in which the items were selected by the user to model the implicit
short-term interests of the user for the task of next item prediction, however
we feel that the features of the songs occurring in the sequence could also
convey some important information about the short-term user interest which only
the items cannot. In this direction, we propose a novel attentive neural
architecture which in addition to the sequence of items selected by the user,
uses the features of these items to better learn the user short-term
preferences and recommend the next song to the user.
|
[
{
"created": "Tue, 20 Nov 2018 12:10:06 GMT",
"version": "v1"
}
] |
2018-11-21
|
[
[
"Sachdeva",
"Noveen",
""
],
[
"Gupta",
"Kartik",
""
],
[
"Pudi",
"Vikram",
""
]
] |
Recommender Systems are an integral part of music sharing platforms. Often the aim of these systems is to increase the time, the user spends on the platform and hence having a high commercial value. The systems which aim at increasing the average time a user spends on the platform often need to recommend songs which the user might want to listen to next at each point in time. This is different from recommendation systems which try to predict the item which might be of interest to the user at some point in the user lifetime but not necessarily in the very near future. Prediction of the next song the user might like requires some kind of modeling of the user interests at the given point of time. Attentive neural networks have been exploiting the sequence in which the items were selected by the user to model the implicit short-term interests of the user for the task of next item prediction, however we feel that the features of the songs occurring in the sequence could also convey some important information about the short-term user interest which only the items cannot. In this direction, we propose a novel attentive neural architecture which in addition to the sequence of items selected by the user, uses the features of these items to better learn the user short-term preferences and recommend the next song to the user.
|
0710.2139
|
Ashkan Aazami
|
Ashkan Aazami, Michael D. Stilp
|
Approximation algorithms and hardness for domination with propagation
| null | null | null | null |
cs.CC cs.DM
| null |
The power dominating set (PDS) problem is the following extension of the
well-known dominating set problem: find a smallest-size set of nodes $S$ that
power dominates all the nodes, where a node $v$ is power dominated if (1) $v$
is in $S$ or $v$ has a neighbor in $S$, or (2) $v$ has a neighbor $w$ such that
$w$ and all of its neighbors except $v$ are power dominated. We show a hardness
of approximation threshold of $2^{\log^{1-\epsilon}{n}}$ in contrast to the
logarithmic hardness for the dominating set problem. We give an $O(\sqrt{n})$
approximation algorithm for planar graphs, and show that our methods cannot
improve on this approximation guarantee. Finally, we initiate the study of PDS
on directed graphs, and show the same hardness threshold of
$2^{\log^{1-\epsilon}{n}}$ for directed \emph{acyclic} graphs. Also we show
that the directed PDS problem can be solved optimally in linear time if the
underlying undirected graph has bounded tree-width.
|
[
{
"created": "Wed, 10 Oct 2007 23:30:49 GMT",
"version": "v1"
}
] |
2007-10-12
|
[
[
"Aazami",
"Ashkan",
""
],
[
"Stilp",
"Michael D.",
""
]
] |
The power dominating set (PDS) problem is the following extension of the well-known dominating set problem: find a smallest-size set of nodes $S$ that power dominates all the nodes, where a node $v$ is power dominated if (1) $v$ is in $S$ or $v$ has a neighbor in $S$, or (2) $v$ has a neighbor $w$ such that $w$ and all of its neighbors except $v$ are power dominated. We show a hardness of approximation threshold of $2^{\log^{1-\epsilon}{n}}$ in contrast to the logarithmic hardness for the dominating set problem. We give an $O(\sqrt{n})$ approximation algorithm for planar graphs, and show that our methods cannot improve on this approximation guarantee. Finally, we initiate the study of PDS on directed graphs, and show the same hardness threshold of $2^{\log^{1-\epsilon}{n}}$ for directed \emph{acyclic} graphs. Also we show that the directed PDS problem can be solved optimally in linear time if the underlying undirected graph has bounded tree-width.
|
2110.06750
|
Darius Sas
|
Darius Sas, Ilaria Pigazzini, Paris Avgeriou, Francesca Arcelli
Fontana
|
The perception of Architectural Smells in industrial practice
|
Submitted and accepted to IEEE Software special issue on Technical
Debt. This is a preprint
| null |
10.1109/MS.2021.3103664
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Architectural Technical Debt (ATD) is considered as the most significant type
of TD in industrial practice. In this study, we interview 21 software engineers
and architects to investigate a specific type of ATD, namely architectural
smells (AS). Our goal is to understand the phenomenon of AS better and support
practitioners to better manage it and researchers to offer relevant support.
The findings of this study provide insights on how practitioners perceive AS
and how they introduce them, the maintenance and evolution issues they
experienced and associated to the presence of AS, and what practices and tools
they adopt to manage AS.
|
[
{
"created": "Wed, 13 Oct 2021 14:29:31 GMT",
"version": "v1"
}
] |
2022-03-17
|
[
[
"Sas",
"Darius",
""
],
[
"Pigazzini",
"Ilaria",
""
],
[
"Avgeriou",
"Paris",
""
],
[
"Fontana",
"Francesca Arcelli",
""
]
] |
Architectural Technical Debt (ATD) is considered as the most significant type of TD in industrial practice. In this study, we interview 21 software engineers and architects to investigate a specific type of ATD, namely architectural smells (AS). Our goal is to understand the phenomenon of AS better and support practitioners to better manage it and researchers to offer relevant support. The findings of this study provide insights on how practitioners perceive AS and how they introduce them, the maintenance and evolution issues they experienced and associated to the presence of AS, and what practices and tools they adopt to manage AS.
|
2008.09645
|
Sheng Liu
|
Sheng Liu, Zuo-Jun Max Shen, Xiang Ji
|
Urban Bike Lane Planning with Bike Trajectories: Models, Algorithms, and
a Real-World Case Study
| null | null | null | null |
cs.AI cs.CE math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study an urban bike lane planning problem based on the fine-grained bike
trajectory data, which is made available by smart city infrastructure such as
bike-sharing systems. The key decision is where to build bike lanes in the
existing road network. As bike-sharing systems become widespread in the
metropolitan areas over the world, bike lanes are being planned and constructed
by many municipal governments to promote cycling and protect cyclists.
Traditional bike lane planning approaches often rely on surveys and heuristics.
We develop a general and novel optimization framework to guide the bike lane
planning from bike trajectories. We formalize the bike lane planning problem in
view of the cyclists' utility functions and derive an integer optimization
model to maximize the utility. To capture cyclists' route choices, we develop a
bilevel program based on the Multinomial Logit model. We derive structural
properties about the base model and prove that the Lagrangian dual of the bike
lane planning model is polynomial-time solvable. Furthermore, we reformulate
the route choice based planning model as a mixed integer linear program using a
linear approximation scheme. We develop tractable formulations and efficient
algorithms to solve the large-scale optimization problem. Via a real-world case
study with a city government, we demonstrate the efficiency of the proposed
algorithms and quantify the trade-off between the coverage of bike trips and
continuity of bike lanes. We show how the network topology evolves according to
the utility functions and highlight the importance of understanding cyclists'
route choices. The proposed framework drives the data-driven urban planning
scheme in smart city operations management.
|
[
{
"created": "Fri, 21 Aug 2020 18:46:51 GMT",
"version": "v1"
}
] |
2020-08-25
|
[
[
"Liu",
"Sheng",
""
],
[
"Shen",
"Zuo-Jun Max",
""
],
[
"Ji",
"Xiang",
""
]
] |
We study an urban bike lane planning problem based on the fine-grained bike trajectory data, which is made available by smart city infrastructure such as bike-sharing systems. The key decision is where to build bike lanes in the existing road network. As bike-sharing systems become widespread in the metropolitan areas over the world, bike lanes are being planned and constructed by many municipal governments to promote cycling and protect cyclists. Traditional bike lane planning approaches often rely on surveys and heuristics. We develop a general and novel optimization framework to guide the bike lane planning from bike trajectories. We formalize the bike lane planning problem in view of the cyclists' utility functions and derive an integer optimization model to maximize the utility. To capture cyclists' route choices, we develop a bilevel program based on the Multinomial Logit model. We derive structural properties about the base model and prove that the Lagrangian dual of the bike lane planning model is polynomial-time solvable. Furthermore, we reformulate the route choice based planning model as a mixed integer linear program using a linear approximation scheme. We develop tractable formulations and efficient algorithms to solve the large-scale optimization problem. Via a real-world case study with a city government, we demonstrate the efficiency of the proposed algorithms and quantify the trade-off between the coverage of bike trips and continuity of bike lanes. We show how the network topology evolves according to the utility functions and highlight the importance of understanding cyclists' route choices. The proposed framework drives the data-driven urban planning scheme in smart city operations management.
|
1212.3879
|
EPTCS
|
Jurriaan Rot (LIACS - Leiden University, The Netherlands), Irina
M\u{a}riuca As\u{a}voae (Alexandru Ioan Cuza University, Romania), Frank de
Boer (Centrum Wiskunde en Informatica (CWI), The Netherlands), Marcello M.
Bonsangue (LIACS - Leiden University, The Netherlands), Dorel Lucanu
(Alexandru Ioan Cuza University, Romania)
|
Interacting via the Heap in the Presence of Recursion
|
In Proceedings ICE 2012, arXiv:1212.3458
|
EPTCS 104, 2012, pp. 99-113
|
10.4204/EPTCS.104.9
| null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Almost all modern imperative programming languages include operations for
dynamically manipulating the heap, for example by allocating and deallocating
objects, and by updating reference fields. In the presence of recursive
procedures and local variables the interactions of a program with the heap can
become rather complex, as an unbounded number of objects can be allocated
either on the call stack using local variables, or, anonymously, on the heap
using reference fields. As such a static analysis is, in general, undecidable.
In this paper we study the verification of recursive programs with unbounded
allocation of objects, in a simple imperative language for heap manipulation.
We present an improved semantics for this language, using an abstraction that
is precise. For any program with a bounded visible heap, meaning that the
number of objects reachable from variables at any point of execution is
bounded, this abstraction is a finitary representation of its behaviour, even
though an unbounded number of objects can appear in the state. As a
consequence, for such programs model checking is decidable.
Finally we introduce a specification language for temporal properties of the
heap, and discuss model checking these properties against heap-manipulating
programs.
|
[
{
"created": "Mon, 17 Dec 2012 03:42:47 GMT",
"version": "v1"
}
] |
2012-12-18
|
[
[
"Rot",
"Jurriaan",
"",
"LIACS - Leiden University, The Netherlands"
],
[
"Asăvoae",
"Irina Măriuca",
"",
"Alexandru Ioan Cuza University, Romania"
],
[
"de Boer",
"Frank",
"",
"Centrum Wiskunde en Informatica"
],
[
"Bonsangue",
"Marcello M.",
"",
"LIACS - Leiden University, The Netherlands"
],
[
"Lucanu",
"Dorel",
"",
"Alexandru Ioan Cuza University, Romania"
]
] |
Almost all modern imperative programming languages include operations for dynamically manipulating the heap, for example by allocating and deallocating objects, and by updating reference fields. In the presence of recursive procedures and local variables the interactions of a program with the heap can become rather complex, as an unbounded number of objects can be allocated either on the call stack using local variables, or, anonymously, on the heap using reference fields. As such a static analysis is, in general, undecidable. In this paper we study the verification of recursive programs with unbounded allocation of objects, in a simple imperative language for heap manipulation. We present an improved semantics for this language, using an abstraction that is precise. For any program with a bounded visible heap, meaning that the number of objects reachable from variables at any point of execution is bounded, this abstraction is a finitary representation of its behaviour, even though an unbounded number of objects can appear in the state. As a consequence, for such programs model checking is decidable. Finally we introduce a specification language for temporal properties of the heap, and discuss model checking these properties against heap-manipulating programs.
|
1903.00565
|
Behnam Askarian
|
Fatemeh Sadat Tabei and Behnam Askarian
|
Effect of Proxy Nodes on the Performance of TCP-Based Transport Layer
Protocols in Wireless Sensor Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless Sensor Networks have recently attracted many researchers attentions
due to their wide range of applications. Even though a plethora of studies have
been carried out on characteristics, special conditions, and various aspects of
WSNs, transport protocol which is compatible with conditions of Wireless Sensor
Networks has not been considerably addressed. Wireless Sensor Networks have
limitations such as storage space, energy resources, and wireless communication
issues. Accordingly, widely-used transport protocols like Transmission Control
Protocol (TCP) may not enjoy sufficient efficiency in such networks. In this
paper, we study the characteristics of WSNs leading to design transport layer
protocol for WSNs and aim at evaluating the efficiency of TCP and its dependent
protocols (TCP variables), which are introduced to wireless networks. We
propose to employ proxy nodes near sinks to improve the performance of
transport layer. Our NS-2 simulation results indicate that throughput and
packet delivery ratio are improved from 20 to 50 percent after employing proxy
nodes, while the average message delay is almost increased twice.
|
[
{
"created": "Fri, 1 Mar 2019 22:31:03 GMT",
"version": "v1"
}
] |
2019-03-05
|
[
[
"Tabei",
"Fatemeh Sadat",
""
],
[
"Askarian",
"Behnam",
""
]
] |
Wireless Sensor Networks have recently attracted many researchers attentions due to their wide range of applications. Even though a plethora of studies have been carried out on characteristics, special conditions, and various aspects of WSNs, transport protocol which is compatible with conditions of Wireless Sensor Networks has not been considerably addressed. Wireless Sensor Networks have limitations such as storage space, energy resources, and wireless communication issues. Accordingly, widely-used transport protocols like Transmission Control Protocol (TCP) may not enjoy sufficient efficiency in such networks. In this paper, we study the characteristics of WSNs leading to design transport layer protocol for WSNs and aim at evaluating the efficiency of TCP and its dependent protocols (TCP variables), which are introduced to wireless networks. We propose to employ proxy nodes near sinks to improve the performance of transport layer. Our NS-2 simulation results indicate that throughput and packet delivery ratio are improved from 20 to 50 percent after employing proxy nodes, while the average message delay is almost increased twice.
|
2112.13808
|
Jamin Shin
|
Jamin Shin, Juneyoung Park
|
Pedagogical Word Recommendation: A novel task and dataset on
personalized vocabulary acquisition for L2 learners
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When learning a second language (L2), one of the most important but tedious
components that often demoralizes students with its ineffectiveness and
inefficiency is vocabulary acquisition, or more simply put, memorizing words.
In light of such, a personalized and educational vocabulary recommendation
system that traces a learner's vocabulary knowledge state would have an immense
learning impact as it could resolve both issues. Therefore, in this paper, we
propose and release data for a novel task called Pedagogical Word
Recommendation (PWR). The main goal of PWR is to predict whether a given
learner knows a given word based on other words the learner has already seen.
To elaborate, we collect this data via an Intelligent Tutoring System (ITS)
that is serviced to ~1M L2 learners who study for the standardized English
exam, TOEIC. As a feature of this ITS, students can directly indicate words
they do not know from the questions they solved to create wordbooks. Finally,
we report the evaluation results of a Neural Collaborative Filtering approach
along with an exploratory data analysis and discuss the impact and efficacy of
this dataset as a baseline for future studies on this task.
|
[
{
"created": "Mon, 27 Dec 2021 17:52:48 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Dec 2021 04:52:26 GMT",
"version": "v2"
}
] |
2021-12-30
|
[
[
"Shin",
"Jamin",
""
],
[
"Park",
"Juneyoung",
""
]
] |
When learning a second language (L2), one of the most important but tedious components that often demoralizes students with its ineffectiveness and inefficiency is vocabulary acquisition, or more simply put, memorizing words. In light of such, a personalized and educational vocabulary recommendation system that traces a learner's vocabulary knowledge state would have an immense learning impact as it could resolve both issues. Therefore, in this paper, we propose and release data for a novel task called Pedagogical Word Recommendation (PWR). The main goal of PWR is to predict whether a given learner knows a given word based on other words the learner has already seen. To elaborate, we collect this data via an Intelligent Tutoring System (ITS) that is serviced to ~1M L2 learners who study for the standardized English exam, TOEIC. As a feature of this ITS, students can directly indicate words they do not know from the questions they solved to create wordbooks. Finally, we report the evaluation results of a Neural Collaborative Filtering approach along with an exploratory data analysis and discuss the impact and efficacy of this dataset as a baseline for future studies on this task.
|
2403.01649
|
Steven Bellovin
|
Susan Landau, James X. Dempsey, Ece Kamar, Steven M. Bellovin
|
Recommendations for Government Development and Use of Advanced Automated
Systems to Make Decisions about Individuals
| null | null | null | null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Contestability -- the ability to effectively challenge a decision -- is
critical to the implementation of fairness. In the context of governmental
decision making about individuals, contestability is often constitutionally
required as an element of due process; specific procedures may be required by
state or federal law relevant to a particular program. In addition,
contestability can be a valuable way to discover systemic errors, contributing
to ongoing assessments and system improvement.
On January 24-25, 2024, with support from the National Science Foundation and
the William and Flora Hewlett Foundation, we convened a diverse group of
government officials, representatives of leading technology companies,
technology and policy experts from academia and the non-profit sector,
advocates, and stakeholders for a workshop on advanced automated decision
making, contestability, and the law. Informed by the workshop's rich and
wide-ranging discussion, we offer these recommendations. A full report
summarizing the discussion is in preparation.
|
[
{
"created": "Mon, 4 Mar 2024 00:03:00 GMT",
"version": "v1"
}
] |
2024-03-05
|
[
[
"Landau",
"Susan",
""
],
[
"Dempsey",
"James X.",
""
],
[
"Kamar",
"Ece",
""
],
[
"Bellovin",
"Steven M.",
""
]
] |
Contestability -- the ability to effectively challenge a decision -- is critical to the implementation of fairness. In the context of governmental decision making about individuals, contestability is often constitutionally required as an element of due process; specific procedures may be required by state or federal law relevant to a particular program. In addition, contestability can be a valuable way to discover systemic errors, contributing to ongoing assessments and system improvement. On January 24-25, 2024, with support from the National Science Foundation and the William and Flora Hewlett Foundation, we convened a diverse group of government officials, representatives of leading technology companies, technology and policy experts from academia and the non-profit sector, advocates, and stakeholders for a workshop on advanced automated decision making, contestability, and the law. Informed by the workshop's rich and wide-ranging discussion, we offer these recommendations. A full report summarizing the discussion is in preparation.
|
2108.09956
|
Deborah Amos Phiri
|
Deborah Amos Phiri and Chipo Kanjo
|
Policy-Practice Contradiction: Case of Cloud Computing Adoption in the
Malawi Health Sector
|
In proceedings of the 1st Virtual Conference on Implications of
Information and Digital Technologies for Development, 2021
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper examines the dynamics of policy implementation and how policy
contradicts reality on the ground when it comes to practice. The paper finds
that despite having well-laid out policy; the actual practice is contrary.
Taking data storage policy within the Ministry of Health in Malawi as a case
study, the paper highlights that the contextual realities of where Ministry of
Health (MoH) data is stored depends on a number of
Technology-Organizational-Environmental (TOE) factors. In the wake of cloud
computing; some of these factors act as causative factors for data to be stored
in the cloud; contradicting the data storage policy.
|
[
{
"created": "Mon, 23 Aug 2021 05:59:46 GMT",
"version": "v1"
}
] |
2021-08-24
|
[
[
"Phiri",
"Deborah Amos",
""
],
[
"Kanjo",
"Chipo",
""
]
] |
This paper examines the dynamics of policy implementation and how policy contradicts reality on the ground when it comes to practice. The paper finds that despite having well-laid out policy; the actual practice is contrary. Taking data storage policy within the Ministry of Health in Malawi as a case study, the paper highlights that the contextual realities of where Ministry of Health (MoH) data is stored depends on a number of Technology-Organizational-Environmental (TOE) factors. In the wake of cloud computing; some of these factors act as causative factors for data to be stored in the cloud; contradicting the data storage policy.
|
1403.5468
|
Jian-Jun Shu
|
Jian-Jun Shu and Qi-Wen Wang
|
Beyond Parrondo's paradox
| null |
Scientific Reports, Vol. 4, No. 4244, pp. 1-9, 2014
|
10.1038/srep04244
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Parrondo's paradox is a counterintuitive phenomenon where
individually-losing strategies can be combined in producing a winning
expectation. In this paper, the issues surrounding the Parrondo's paradox are
investigated. The focus is lying on testifying whether the same paradoxical
effect can be reproduced by using a simple capital dependent game. The
paradoxical effect generated by the Parrondo's paradox can be explained by
placing all the parameters in one probability space. Based on this framework,
it is able to generate other possible paradoxical effects by manipulating the
parameters in the probability space.
|
[
{
"created": "Fri, 21 Mar 2014 14:10:46 GMT",
"version": "v1"
}
] |
2014-03-24
|
[
[
"Shu",
"Jian-Jun",
""
],
[
"Wang",
"Qi-Wen",
""
]
] |
The Parrondo's paradox is a counterintuitive phenomenon where individually-losing strategies can be combined in producing a winning expectation. In this paper, the issues surrounding the Parrondo's paradox are investigated. The focus is lying on testifying whether the same paradoxical effect can be reproduced by using a simple capital dependent game. The paradoxical effect generated by the Parrondo's paradox can be explained by placing all the parameters in one probability space. Based on this framework, it is able to generate other possible paradoxical effects by manipulating the parameters in the probability space.
|
1810.03120
|
Andrzej Pelc
|
Andrzej Pelc, Ram Narayan Yadav
|
Using Time to Break Symmetry: Universal Deterministic Anonymous
Rendezvous
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two anonymous mobile agents navigate synchronously in an anonymous graph and
have to meet at a node, using a deterministic algorithm. This is a symmetry
breaking task called rendezvous, equivalent to the fundamental task of leader
election between the agents. When is this feasible in a completely anonymous
environment? It is known that agents can always meet if their initial positions
are nonsymmetric, and that if they are symmetric and agents start
simultaneously then rendezvous is impossible. What happens for symmetric
initial positions with non-simultaneous start? Can symmetry between the agents
be broken by the delay between their starting times?
In order to answer these questions, we consider {\em space-time initial
configurations} (abbreviated by STIC). A STIC is formalized as
$[(u,v),\delta]$, where $u$ and $v$ are initial nodes of the agents in some
graph and $\delta$ is a non-negative integer that represents the difference
between their starting times. A STIC is {\em feasible} if there exists a
deterministic algorithm, even dedicated to this particular STIC, which
accomplishes rendezvous for it. Our main result is a characterization of all
feasible STICs and the design of a universal deterministic algorithm that
accomplishes rendezvous for all of them without {\em any } a priori knowledge
of the agents. Thus, as far as feasibility is concerned, we completely solve
the problem of symmetry breaking between two anonymous agents in anonymous
graphs. Moreover, we show that such a universal algorithm cannot work for all
feasible STICs in time polynomial in the initial distance between the agents.
|
[
{
"created": "Sun, 7 Oct 2018 11:16:59 GMT",
"version": "v1"
}
] |
2018-10-09
|
[
[
"Pelc",
"Andrzej",
""
],
[
"Yadav",
"Ram Narayan",
""
]
] |
Two anonymous mobile agents navigate synchronously in an anonymous graph and have to meet at a node, using a deterministic algorithm. This is a symmetry breaking task called rendezvous, equivalent to the fundamental task of leader election between the agents. When is this feasible in a completely anonymous environment? It is known that agents can always meet if their initial positions are nonsymmetric, and that if they are symmetric and agents start simultaneously then rendezvous is impossible. What happens for symmetric initial positions with non-simultaneous start? Can symmetry between the agents be broken by the delay between their starting times? In order to answer these questions, we consider {\em space-time initial configurations} (abbreviated by STIC). A STIC is formalized as $[(u,v),\delta]$, where $u$ and $v$ are initial nodes of the agents in some graph and $\delta$ is a non-negative integer that represents the difference between their starting times. A STIC is {\em feasible} if there exists a deterministic algorithm, even dedicated to this particular STIC, which accomplishes rendezvous for it. Our main result is a characterization of all feasible STICs and the design of a universal deterministic algorithm that accomplishes rendezvous for all of them without {\em any } a priori knowledge of the agents. Thus, as far as feasibility is concerned, we completely solve the problem of symmetry breaking between two anonymous agents in anonymous graphs. Moreover, we show that such a universal algorithm cannot work for all feasible STICs in time polynomial in the initial distance between the agents.
|
2405.03724
|
Junxiang Wang
|
Junxiang Wang and Liang Zhao
|
GraphSL: An Open-Source Library for Graph Source Localization Approaches
and Benchmark Datasets
| null | null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce GraphSL, a new library for studying the graph source
localization problem. graph diffusion and graph source localization are inverse
problems in nature: graph diffusion predicts information diffusions from
information sources, while graph source localization predicts information
sources from information diffusions. GraphSL facilitates the exploration of
various graph diffusion models for simulating information diffusions and
enables the evaluation of cutting-edge source localization approaches on
established benchmark datasets. The source code of GraphSL is made available at
Github Repository (https://github.com/xianggebenben/GraphSL). Bug reports and
feedback can be directed to the Github issues page
(https://github.com/xianggebenben/GraphSL/issues).
|
[
{
"created": "Mon, 6 May 2024 04:00:00 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Jul 2024 17:34:22 GMT",
"version": "v2"
}
] |
2024-07-30
|
[
[
"Wang",
"Junxiang",
""
],
[
"Zhao",
"Liang",
""
]
] |
We introduce GraphSL, a new library for studying the graph source localization problem. graph diffusion and graph source localization are inverse problems in nature: graph diffusion predicts information diffusions from information sources, while graph source localization predicts information sources from information diffusions. GraphSL facilitates the exploration of various graph diffusion models for simulating information diffusions and enables the evaluation of cutting-edge source localization approaches on established benchmark datasets. The source code of GraphSL is made available at Github Repository (https://github.com/xianggebenben/GraphSL). Bug reports and feedback can be directed to the Github issues page (https://github.com/xianggebenben/GraphSL/issues).
|
1612.03849
|
Gabriele Oliva
|
Gabriele Oliva, Andrea Gasparri, Adriano Fagiolini, and Christoforos
N. Hadjicostis
|
Distributed and Proximity-Constrained C-Means for Discrete Coverage
Control
|
To appear in the 56th IEEE Conference on Decision and Control, to be
held in Melbourne, Australia, December 12-15, 2017
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a novel distributed coverage control framework for a
network of mobile agents, in charge of covering a finite set of points of
interest (PoI), such as people in danger, geographically dispersed equipment or
environmental landmarks. The proposed algorithm is inspired by C-Means, an
unsupervised learning algorithm originally proposed for non-exclusive
clustering and for identification of cluster centroids from a set of
observations. To cope with the agents' limited sensing range and avoid
infeasible coverage solutions, traditional C-Means needs to be enhanced with
proximity constraints, ensuring that each agent takes into account only
neighboring PoIs. The proposed coverage control framework provides useful
information concerning the ranking or importance of the different PoIs to the
agents, which can be exploited in further application-dependent data fusion
processes, patrolling, or disaster relief applications.
|
[
{
"created": "Mon, 12 Dec 2016 18:54:07 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Mar 2017 14:35:51 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Sep 2017 09:30:10 GMT",
"version": "v3"
},
{
"created": "Sat, 16 Sep 2017 18:38:16 GMT",
"version": "v4"
}
] |
2017-09-19
|
[
[
"Oliva",
"Gabriele",
""
],
[
"Gasparri",
"Andrea",
""
],
[
"Fagiolini",
"Adriano",
""
],
[
"Hadjicostis",
"Christoforos N.",
""
]
] |
In this paper we present a novel distributed coverage control framework for a network of mobile agents, in charge of covering a finite set of points of interest (PoI), such as people in danger, geographically dispersed equipment or environmental landmarks. The proposed algorithm is inspired by C-Means, an unsupervised learning algorithm originally proposed for non-exclusive clustering and for identification of cluster centroids from a set of observations. To cope with the agents' limited sensing range and avoid infeasible coverage solutions, traditional C-Means needs to be enhanced with proximity constraints, ensuring that each agent takes into account only neighboring PoIs. The proposed coverage control framework provides useful information concerning the ranking or importance of the different PoIs to the agents, which can be exploited in further application-dependent data fusion processes, patrolling, or disaster relief applications.
|
1502.04662
|
Tim Althoff
|
Tim Althoff, Xin Luna Dong, Kevin Murphy, Safa Alai, Van Dang, Wei
Zhang
|
TimeMachine: Timeline Generation for Knowledge-Base Entities
|
To appear at ACM SIGKDD KDD'15. 12pp, 7 fig. With appendix. Demo and
other info available at http://cs.stanford.edu/~althoff/timemachine/
| null |
10.1145/2783258.2783325
| null |
cs.DB cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method called TIMEMACHINE to generate a timeline of events and
relations for entities in a knowledge base. For example for an actor, such a
timeline should show the most important professional and personal milestones
and relationships such as works, awards, collaborations, and family
relationships. We develop three orthogonal timeline quality criteria that an
ideal timeline should satisfy: (1) it shows events that are relevant to the
entity; (2) it shows events that are temporally diverse, so they distribute
along the time axis, avoiding visual crowding and allowing for easy user
interaction, such as zooming in and out; and (3) it shows events that are
content diverse, so they contain many different types of events (e.g., for an
actor, it should show movies and marriages and awards, not just movies). We
present an algorithm to generate such timelines for a given time period and
screen size, based on submodular optimization and web-co-occurrence statistics
with provable performance guarantees. A series of user studies using Mechanical
Turk shows that all three quality criteria are crucial to produce quality
timelines and that our algorithm significantly outperforms various baseline and
state-of-the-art methods.
|
[
{
"created": "Mon, 16 Feb 2015 18:53:01 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Feb 2015 07:02:11 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Jun 2015 20:39:26 GMT",
"version": "v3"
}
] |
2015-06-10
|
[
[
"Althoff",
"Tim",
""
],
[
"Dong",
"Xin Luna",
""
],
[
"Murphy",
"Kevin",
""
],
[
"Alai",
"Safa",
""
],
[
"Dang",
"Van",
""
],
[
"Zhang",
"Wei",
""
]
] |
We present a method called TIMEMACHINE to generate a timeline of events and relations for entities in a knowledge base. For example for an actor, such a timeline should show the most important professional and personal milestones and relationships such as works, awards, collaborations, and family relationships. We develop three orthogonal timeline quality criteria that an ideal timeline should satisfy: (1) it shows events that are relevant to the entity; (2) it shows events that are temporally diverse, so they distribute along the time axis, avoiding visual crowding and allowing for easy user interaction, such as zooming in and out; and (3) it shows events that are content diverse, so they contain many different types of events (e.g., for an actor, it should show movies and marriages and awards, not just movies). We present an algorithm to generate such timelines for a given time period and screen size, based on submodular optimization and web-co-occurrence statistics with provable performance guarantees. A series of user studies using Mechanical Turk shows that all three quality criteria are crucial to produce quality timelines and that our algorithm significantly outperforms various baseline and state-of-the-art methods.
|
2107.03140
|
Bahram Sadeghi Bigham
|
Bahram Sadeghi Bigham
|
Minimum Constraint Removal Problem for Line Segments is NP-hard
| null |
Bigham, Bahram Sadeghi. "Minimum constraint removal problem for
line segments is NP-hard." Discrete Mathematics, Algorithms and Applications
(2022): 2250055
| null | null |
cs.CG cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the minimum constraint removal ($MCR$), there is no feasible path to move
from the starting point towards the goal and, the minimum constraints should be
removed in order to find a collision-free path. It has been proved that $MCR$
problem is $NP-hard$ when constraints have arbitrary shapes or even they are in
shape of convex polygons. However, it has a simple linear solution when
constraints are lines and the problem is open for other cases yet. In this
paper, using a reduction from Subset Sum problem, in three steps, we show that
the problem is NP-hard for both weighted and unweighted line segments.
|
[
{
"created": "Wed, 7 Jul 2021 10:57:22 GMT",
"version": "v1"
}
] |
2023-02-21
|
[
[
"Bigham",
"Bahram Sadeghi",
""
]
] |
In the minimum constraint removal ($MCR$), there is no feasible path to move from the starting point towards the goal and, the minimum constraints should be removed in order to find a collision-free path. It has been proved that $MCR$ problem is $NP-hard$ when constraints have arbitrary shapes or even they are in shape of convex polygons. However, it has a simple linear solution when constraints are lines and the problem is open for other cases yet. In this paper, using a reduction from Subset Sum problem, in three steps, we show that the problem is NP-hard for both weighted and unweighted line segments.
|
1904.05272
|
Tang Liu
|
Tang Liu and Daniela Tuninetti
|
Decentralized Pliable Index Coding
|
5 pages. To be presented at ISIT 2019
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the ${\it decentralized}$ Pliable Index CODing (PICOD)
problem: a variant of the Index Coding (IC) problem, where a central
transmitter serves ${\it pliable}$ users with message side information; here,
pliable refers to the fact that a user is satisfied by decoding ${\it any}$ $t$
messages that are not in its side information set. In the decentralized PICOD,
a central transmitter with knowledge of all messages is not present, and
instead users share among themselves massages that can only depend on their
local side information set. This paper characterizes the capacity of two
classes of decentralized complete--$S$ PICOD$(t)$ problems with $m$ messages
(where the set $S\subset[m]$ contains the sizes of the side information sets,
and the number of users is $n=\sum_{s\in S}\binom{m}{s}$, with no two users
having the same side information set): (i) the consecutive case:
$S=[s_\min:s_\max]$ for some $0 \leq s_\min\leq s_\max \leq m-t$, and (ii) the
complement-consecutive case: $S=[0:m-t]\backslash[s_\min:s_\max]$, for some $0
< s_\min\leq s_\max < m-t$. Interestingly, the optimal code-length for the
decentralized PICOD in those cases is the same as for the classical
(centralized) PICOD counterpart, except when the problem is no longer pliable,
that is, it reduces to an IC problem where every user needs to decode all
messages not in its side information set. Although the optimal code-length may
be the same in both centralized and decentralized settings, the actual optimal
codes are not. For the decentralized PICOD, sparse Maximum Distance Separable
(MDS) codes and vector linear index codes are used (as opposed to scalar linear
codes).
|
[
{
"created": "Wed, 10 Apr 2019 16:18:57 GMT",
"version": "v1"
}
] |
2019-04-11
|
[
[
"Liu",
"Tang",
""
],
[
"Tuninetti",
"Daniela",
""
]
] |
This paper introduces the ${\it decentralized}$ Pliable Index CODing (PICOD) problem: a variant of the Index Coding (IC) problem, where a central transmitter serves ${\it pliable}$ users with message side information; here, pliable refers to the fact that a user is satisfied by decoding ${\it any}$ $t$ messages that are not in its side information set. In the decentralized PICOD, a central transmitter with knowledge of all messages is not present, and instead users share among themselves massages that can only depend on their local side information set. This paper characterizes the capacity of two classes of decentralized complete--$S$ PICOD$(t)$ problems with $m$ messages (where the set $S\subset[m]$ contains the sizes of the side information sets, and the number of users is $n=\sum_{s\in S}\binom{m}{s}$, with no two users having the same side information set): (i) the consecutive case: $S=[s_\min:s_\max]$ for some $0 \leq s_\min\leq s_\max \leq m-t$, and (ii) the complement-consecutive case: $S=[0:m-t]\backslash[s_\min:s_\max]$, for some $0 < s_\min\leq s_\max < m-t$. Interestingly, the optimal code-length for the decentralized PICOD in those cases is the same as for the classical (centralized) PICOD counterpart, except when the problem is no longer pliable, that is, it reduces to an IC problem where every user needs to decode all messages not in its side information set. Although the optimal code-length may be the same in both centralized and decentralized settings, the actual optimal codes are not. For the decentralized PICOD, sparse Maximum Distance Separable (MDS) codes and vector linear index codes are used (as opposed to scalar linear codes).
|
2309.01398
|
Danqing Hu
|
Danqing Hu, Bing Liu, Xiaofeng Zhu, Xudong Lu, Nan Wu
|
Zero-shot information extraction from radiological reports using ChatGPT
| null | null |
10.1016/j.ijmedinf.2023.105321
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electronic health records contain an enormous amount of valuable information,
but many are recorded in free text. Information extraction is the strategy to
transform the sequence of characters into structured data, which can be
employed for secondary analysis. However, the traditional information
extraction components, such as named entity recognition and relation
extraction, require annotated data to optimize the model parameters, which has
become one of the major bottlenecks in building information extraction systems.
With the large language models achieving good performances on various
downstream NLP tasks without parameter tuning, it becomes possible to use large
language models for zero-shot information extraction. In this study, we aim to
explore whether the most popular large language model, ChatGPT, can extract
useful information from the radiological reports. We first design the prompt
template for the interested information in the CT reports. Then, we generate
the prompts by combining the prompt template with the CT reports as the inputs
of ChatGPT to obtain the responses. A post-processing module is developed to
transform the responses into structured extraction results. We conducted the
experiments with 847 CT reports collected from Peking University Cancer
Hospital. The experimental results indicate that ChatGPT can achieve
competitive performances for some extraction tasks compared with the baseline
information extraction system, but some limitations need to be further
improved.
|
[
{
"created": "Mon, 4 Sep 2023 07:00:26 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Sep 2023 01:36:08 GMT",
"version": "v2"
}
] |
2024-01-03
|
[
[
"Hu",
"Danqing",
""
],
[
"Liu",
"Bing",
""
],
[
"Zhu",
"Xiaofeng",
""
],
[
"Lu",
"Xudong",
""
],
[
"Wu",
"Nan",
""
]
] |
Electronic health records contain an enormous amount of valuable information, but many are recorded in free text. Information extraction is the strategy to transform the sequence of characters into structured data, which can be employed for secondary analysis. However, the traditional information extraction components, such as named entity recognition and relation extraction, require annotated data to optimize the model parameters, which has become one of the major bottlenecks in building information extraction systems. With the large language models achieving good performances on various downstream NLP tasks without parameter tuning, it becomes possible to use large language models for zero-shot information extraction. In this study, we aim to explore whether the most popular large language model, ChatGPT, can extract useful information from the radiological reports. We first design the prompt template for the interested information in the CT reports. Then, we generate the prompts by combining the prompt template with the CT reports as the inputs of ChatGPT to obtain the responses. A post-processing module is developed to transform the responses into structured extraction results. We conducted the experiments with 847 CT reports collected from Peking University Cancer Hospital. The experimental results indicate that ChatGPT can achieve competitive performances for some extraction tasks compared with the baseline information extraction system, but some limitations need to be further improved.
|
2402.03413
|
Huiyu Duan
|
Xiongkuo Min, Huiyu Duan, Wei Sun, Yucheng Zhu, Guangtao Zhai
|
Perceptual Video Quality Assessment: A Survey
| null | null | null | null |
cs.MM cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perceptual video quality assessment plays a vital role in the field of video
processing due to the existence of quality degradations introduced in various
stages of video signal acquisition, compression, transmission and display. With
the advancement of internet communication and cloud service technology, video
content and traffic are growing exponentially, which further emphasizes the
requirement for accurate and rapid assessment of video quality. Therefore,
numerous subjective and objective video quality assessment studies have been
conducted over the past two decades for both generic videos and specific videos
such as streaming, user-generated content (UGC), 3D, virtual and augmented
reality (VR and AR), high frame rate (HFR), audio-visual, etc. This survey
provides an up-to-date and comprehensive review of these video quality
assessment studies. Specifically, we first review the subjective video quality
assessment methodologies and databases, which are necessary for validating the
performance of video quality metrics. Second, the objective video quality
assessment algorithms for general purposes are surveyed and concluded according
to the methodologies utilized in the quality measures. Third, we overview the
objective video quality assessment measures for specific applications and
emerging topics. Finally, the performances of the state-of-the-art video
quality assessment measures are compared and analyzed. This survey provides a
systematic overview of both classical works and recent progresses in the realm
of video quality assessment, which can help other researchers quickly access
the field and conduct relevant research.
|
[
{
"created": "Mon, 5 Feb 2024 16:13:52 GMT",
"version": "v1"
}
] |
2024-02-07
|
[
[
"Min",
"Xiongkuo",
""
],
[
"Duan",
"Huiyu",
""
],
[
"Sun",
"Wei",
""
],
[
"Zhu",
"Yucheng",
""
],
[
"Zhai",
"Guangtao",
""
]
] |
Perceptual video quality assessment plays a vital role in the field of video processing due to the existence of quality degradations introduced in various stages of video signal acquisition, compression, transmission and display. With the advancement of internet communication and cloud service technology, video content and traffic are growing exponentially, which further emphasizes the requirement for accurate and rapid assessment of video quality. Therefore, numerous subjective and objective video quality assessment studies have been conducted over the past two decades for both generic videos and specific videos such as streaming, user-generated content (UGC), 3D, virtual and augmented reality (VR and AR), high frame rate (HFR), audio-visual, etc. This survey provides an up-to-date and comprehensive review of these video quality assessment studies. Specifically, we first review the subjective video quality assessment methodologies and databases, which are necessary for validating the performance of video quality metrics. Second, the objective video quality assessment algorithms for general purposes are surveyed and concluded according to the methodologies utilized in the quality measures. Third, we overview the objective video quality assessment measures for specific applications and emerging topics. Finally, the performances of the state-of-the-art video quality assessment measures are compared and analyzed. This survey provides a systematic overview of both classical works and recent progresses in the realm of video quality assessment, which can help other researchers quickly access the field and conduct relevant research.
|
1605.06417
|
Yuan Jiang
|
Wei Shen, Yuan Jiang, Wenjing Gao, Dan Zeng, Xinggang Wang
|
Shape Recognition by Bag of Skeleton-associated Contour Parts
|
10 pages. Has been Accepted by Pattern Recognition Letters 2016
| null |
10.1007/978-3-662-45646-0_40
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Contour and skeleton are two complementary representations for shape
recognition. However combining them in a principal way is nontrivial, as they
are generally abstracted by different structures (closed string vs graph),
respectively. This paper aims at addressing the shape recognition problem by
combining contour and skeleton according to the correspondence between them.
The correspondence provides a straightforward way to associate skeletal
information with a shape contour. More specifically, we propose a new shape
descriptor. named Skeleton-associated Shape Context (SSC), which captures the
features of a contour fragment associated with skeletal information. Benefited
from the association, the proposed shape descriptor provides the complementary
geometric information from both contour and skeleton parts, including the
spatial distribution and the thickness change along the shape part. To form a
meaningful shape feature vector for an overall shape, the Bag of Features
framework is applied to the SSC descriptors extracted from it. Finally, the
shape feature vector is fed into a linear SVM classifier to recognize the
shape. The encouraging experimental results demonstrate that the proposed way
to combine contour and skeleton is effective for shape recognition, which
achieves the state-of-the-art performances on several standard shape
benchmarks.
|
[
{
"created": "Fri, 20 May 2016 16:07:41 GMT",
"version": "v1"
}
] |
2016-05-23
|
[
[
"Shen",
"Wei",
""
],
[
"Jiang",
"Yuan",
""
],
[
"Gao",
"Wenjing",
""
],
[
"Zeng",
"Dan",
""
],
[
"Wang",
"Xinggang",
""
]
] |
Contour and skeleton are two complementary representations for shape recognition. However combining them in a principal way is nontrivial, as they are generally abstracted by different structures (closed string vs graph), respectively. This paper aims at addressing the shape recognition problem by combining contour and skeleton according to the correspondence between them. The correspondence provides a straightforward way to associate skeletal information with a shape contour. More specifically, we propose a new shape descriptor. named Skeleton-associated Shape Context (SSC), which captures the features of a contour fragment associated with skeletal information. Benefited from the association, the proposed shape descriptor provides the complementary geometric information from both contour and skeleton parts, including the spatial distribution and the thickness change along the shape part. To form a meaningful shape feature vector for an overall shape, the Bag of Features framework is applied to the SSC descriptors extracted from it. Finally, the shape feature vector is fed into a linear SVM classifier to recognize the shape. The encouraging experimental results demonstrate that the proposed way to combine contour and skeleton is effective for shape recognition, which achieves the state-of-the-art performances on several standard shape benchmarks.
|
2306.00809
|
Emanuele Francazi
|
Emanuele Francazi, Aurelien Lucchi, Marco Baity-Jesi
|
Initial Guessing Bias: How Untrained Networks Favor Some Classes
|
We have added experiments on pre-trained models and various new
results, including analysis in the limit of an infinite number of classes and
an extension of the analysis to non-identically distributed classes.
Additionally, we have slightly restructured the main paper to include more
discussion on the practical implications of the phenomenon
| null | null | null |
cs.LG cond-mat.dis-nn stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding and controlling biasing effects in neural networks is crucial
for ensuring accurate and fair model performance. In the context of
classification problems, we provide a theoretical analysis demonstrating that
the structure of a deep neural network (DNN) can condition the model to assign
all predictions to the same class, even before the beginning of training, and
in the absence of explicit biases. We prove that, besides dataset properties,
the presence of this phenomenon, which we call \textit{Initial Guessing Bias}
(IGB), is influenced by model choices including dataset preprocessing methods,
and architectural decisions, such as activation functions, max-pooling layers,
and network depth. Our analysis of IGB provides information for architecture
selection and model initialization. We also highlight theoretical consequences,
such as the breakdown of node-permutation symmetry, the violation of
self-averaging and the non-trivial effects that depth has on the phenomenon.
|
[
{
"created": "Thu, 1 Jun 2023 15:37:32 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Nov 2023 16:17:43 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Feb 2024 12:48:53 GMT",
"version": "v3"
},
{
"created": "Thu, 13 Jun 2024 22:30:36 GMT",
"version": "v4"
}
] |
2024-06-17
|
[
[
"Francazi",
"Emanuele",
""
],
[
"Lucchi",
"Aurelien",
""
],
[
"Baity-Jesi",
"Marco",
""
]
] |
Understanding and controlling biasing effects in neural networks is crucial for ensuring accurate and fair model performance. In the context of classification problems, we provide a theoretical analysis demonstrating that the structure of a deep neural network (DNN) can condition the model to assign all predictions to the same class, even before the beginning of training, and in the absence of explicit biases. We prove that, besides dataset properties, the presence of this phenomenon, which we call \textit{Initial Guessing Bias} (IGB), is influenced by model choices including dataset preprocessing methods, and architectural decisions, such as activation functions, max-pooling layers, and network depth. Our analysis of IGB provides information for architecture selection and model initialization. We also highlight theoretical consequences, such as the breakdown of node-permutation symmetry, the violation of self-averaging and the non-trivial effects that depth has on the phenomenon.
|
1509.05636
|
Seetha Ramaiah M
|
M. Seetha Ramaiah, Amitabha Mukerjee, Arindam Chakraborty, Sadbodh
Sharma
|
Visual Generalized Coordinates
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An open problem in robotics is that of using vision to identify a robot's own
body and the world around it. Many models attempt to recover the traditional
C-space parameters. Instead, we propose an alternative C-space by deriving
generalized coordinates from $n$ images of the robot. We show that the space of
such images is bijective to the motion space, so these images lie on a manifold
$\mathcal{V}$ homeomorphic to the canonical C-space. We now approximate this
manifold as a set of $n$ neighbourhood tangent spaces that result in a graph,
which we call the Visual Roadmap (VRM). Given a new robot image, we perform
inverse kinematics visually by interpolating between nearby images in the image
space. Obstacles are projected onto the VRM in $O(n)$ time by superimposition
of images, leading to the identification of collision poses. The edges joining
the free nodes can now be checked with a visual local planner, and free-space
motions computed in $O(nlogn)$ time. This enables us to plan paths in the image
space for a robot manipulator with unknown link geometries, DOF, kinematics,
obstacles, and camera pose. We sketch the proofs for the main theoretical
ideas, identify the assumptions, and demonstrate the approach for both
articulated and mobile robots. We also investigate the feasibility of the
process by investigating various metrics and image sampling densities, and
demonstrate it on simulated and real robots.
|
[
{
"created": "Fri, 18 Sep 2015 14:17:57 GMT",
"version": "v1"
}
] |
2015-09-21
|
[
[
"Ramaiah",
"M. Seetha",
""
],
[
"Mukerjee",
"Amitabha",
""
],
[
"Chakraborty",
"Arindam",
""
],
[
"Sharma",
"Sadbodh",
""
]
] |
An open problem in robotics is that of using vision to identify a robot's own body and the world around it. Many models attempt to recover the traditional C-space parameters. Instead, we propose an alternative C-space by deriving generalized coordinates from $n$ images of the robot. We show that the space of such images is bijective to the motion space, so these images lie on a manifold $\mathcal{V}$ homeomorphic to the canonical C-space. We now approximate this manifold as a set of $n$ neighbourhood tangent spaces that result in a graph, which we call the Visual Roadmap (VRM). Given a new robot image, we perform inverse kinematics visually by interpolating between nearby images in the image space. Obstacles are projected onto the VRM in $O(n)$ time by superimposition of images, leading to the identification of collision poses. The edges joining the free nodes can now be checked with a visual local planner, and free-space motions computed in $O(nlogn)$ time. This enables us to plan paths in the image space for a robot manipulator with unknown link geometries, DOF, kinematics, obstacles, and camera pose. We sketch the proofs for the main theoretical ideas, identify the assumptions, and demonstrate the approach for both articulated and mobile robots. We also investigate the feasibility of the process by investigating various metrics and image sampling densities, and demonstrate it on simulated and real robots.
|
1102.0454
|
Arnau Ramisa
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo,
Ramon Lopez de Mantaras
|
Evaluation of Three Vision Based Object Perception Methods for a Mobile
Robot
|
37 pages, 11 figures
| null | null |
IIIA research report 2011-01
|
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses object perception applied to mobile robotics. Being able
to perceive semantically meaningful objects in unstructured environments is a
key capability in order to make robots suitable to perform high-level tasks in
home environments. However, finding a solution for this task is daunting: it
requires the ability to handle the variability in image formation in a moving
camera with tight time constraints. The paper brings to attention some of the
issues with applying three state of the art object recognition and detection
methods in a mobile robotics scenario, and proposes methods to deal with
windowing/segmentation. Thus, this work aims at evaluating the state-of-the-art
in object perception in an attempt to develop a lightweight solution for mobile
robotics use/research in typical indoor settings.
|
[
{
"created": "Wed, 2 Feb 2011 15:00:09 GMT",
"version": "v1"
}
] |
2015-03-18
|
[
[
"Ramisa",
"Arnau",
""
],
[
"Aldavert",
"David",
""
],
[
"Vasudevan",
"Shrihari",
""
],
[
"Toledo",
"Ricardo",
""
],
[
"de Mantaras",
"Ramon Lopez",
""
]
] |
This paper addresses object perception applied to mobile robotics. Being able to perceive semantically meaningful objects in unstructured environments is a key capability in order to make robots suitable to perform high-level tasks in home environments. However, finding a solution for this task is daunting: it requires the ability to handle the variability in image formation in a moving camera with tight time constraints. The paper brings to attention some of the issues with applying three state of the art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation. Thus, this work aims at evaluating the state-of-the-art in object perception in an attempt to develop a lightweight solution for mobile robotics use/research in typical indoor settings.
|
1607.06042
|
Vaneet Aggarwal
|
Mehdi Ashraphijuo and Vaneet Aggarwal and Xiaodong Wang
|
The DoF of Two-way Butterfly Networks
| null |
IEEE Communications Letters, Volume: 21, Issue: 10, Oct. 2017
|
10.1109/LCOMM.2017.2723364
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the two-way butterfly network, a class of two-way
four-unicast networks. We first show that bidirectional links do not increase
the degrees of freedom for this network thus giving the first example for
networks, to the best of our knowledge, where bidirectional links do not
increase the degrees of freedom. Further, we see that sufficient caching at the
relays or increasing the number of antennas in the relays can double the
two-way degrees of freedom for butterfly network.
|
[
{
"created": "Wed, 20 Jul 2016 17:44:49 GMT",
"version": "v1"
}
] |
2017-11-03
|
[
[
"Ashraphijuo",
"Mehdi",
""
],
[
"Aggarwal",
"Vaneet",
""
],
[
"Wang",
"Xiaodong",
""
]
] |
This paper studies the two-way butterfly network, a class of two-way four-unicast networks. We first show that bidirectional links do not increase the degrees of freedom for this network thus giving the first example for networks, to the best of our knowledge, where bidirectional links do not increase the degrees of freedom. Further, we see that sufficient caching at the relays or increasing the number of antennas in the relays can double the two-way degrees of freedom for butterfly network.
|
1605.03518
|
Yang Huang
|
Yang Huang and Bruno Clerckx
|
Relaying Strategies for Wireless-Powered MIMO Relay Networks
|
Submitted for possible journal publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates relaying schemes in an amplify-and-forward
multiple-input multiple-output relay network, where an energy-constrained relay
harvests wireless power from the source information flow and can be further
aided by an energy flow (EF) in the form of a wireless power transfer at the
destination. However, the joint optimization of the relay matrix and the source
precoder for the energy-flow-assisted (EFA) and the non-EFA (NEFA) schemes is
intractable. The original rate maximization problem is transformed into an
equivalent weighted mean square error minimization problem and optimized
iteratively, where the global optimum of the nonconvex source precoder
subproblem is achieved by semidefinite relaxation and rank reduction. The
iterative algorithm finally converges. Then, the simplified EFA and NEFA
schemes are proposed based on channel diagonalization, such that the matrices
optimizations can be simplified to power optimizations. Closed-form solutions
can be achieved. Simulation results reveal that the EFA schemes can outperform
the NEFA schemes. Additionally, deploying more antennas at the relay increases
the dimension of the signal space at the relay. Exploiting the additional
dimension, the EF leakage in the information detecting block can be nearly
separated from the information signal, such that the EF leakage can be
amplified with a small coefficient.
|
[
{
"created": "Wed, 11 May 2016 17:19:45 GMT",
"version": "v1"
}
] |
2016-05-12
|
[
[
"Huang",
"Yang",
""
],
[
"Clerckx",
"Bruno",
""
]
] |
This paper investigates relaying schemes in an amplify-and-forward multiple-input multiple-output relay network, where an energy-constrained relay harvests wireless power from the source information flow and can be further aided by an energy flow (EF) in the form of a wireless power transfer at the destination. However, the joint optimization of the relay matrix and the source precoder for the energy-flow-assisted (EFA) and the non-EFA (NEFA) schemes is intractable. The original rate maximization problem is transformed into an equivalent weighted mean square error minimization problem and optimized iteratively, where the global optimum of the nonconvex source precoder subproblem is achieved by semidefinite relaxation and rank reduction. The iterative algorithm finally converges. Then, the simplified EFA and NEFA schemes are proposed based on channel diagonalization, such that the matrices optimizations can be simplified to power optimizations. Closed-form solutions can be achieved. Simulation results reveal that the EFA schemes can outperform the NEFA schemes. Additionally, deploying more antennas at the relay increases the dimension of the signal space at the relay. Exploiting the additional dimension, the EF leakage in the information detecting block can be nearly separated from the information signal, such that the EF leakage can be amplified with a small coefficient.
|
2402.15769
|
Zeming Dong
|
Zeming Dong, Qiang Hu, Xiaofei Xie, Maxime Cordy, Mike Papadakis,
Jianjun Zhao
|
Importance Guided Data Augmentation for Neural-Based Code Understanding
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained code models lead the era of code intelligence. Many models have
been designed with impressive performance recently. However, one important
problem, data augmentation for code data that automatically helps developers
prepare training data lacks study in the field of code learning. In this paper,
we introduce a general data augmentation framework, GenCode, to enhance the
training of code understanding models. GenCode follows a
generation-and-selection paradigm to prepare useful training codes.
Specifically, it uses code transformation techniques to generate new code
candidates first and then selects important ones as the training data by
importance metrics. To evaluate the effectiveness of GenCode with a general
importance metric -- loss value, we conduct experiments on four code
understanding tasks (e.g., code clone detection) and three pre-trained code
models (e.g., CodeT5). Compared to the state-of-the-art (SOTA) code
augmentation method, MixCode, GenCode produces code models with 2.92% higher
accuracy and 4.90% robustness on average.
|
[
{
"created": "Sat, 24 Feb 2024 08:57:12 GMT",
"version": "v1"
}
] |
2024-02-27
|
[
[
"Dong",
"Zeming",
""
],
[
"Hu",
"Qiang",
""
],
[
"Xie",
"Xiaofei",
""
],
[
"Cordy",
"Maxime",
""
],
[
"Papadakis",
"Mike",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
Pre-trained code models lead the era of code intelligence. Many models have been designed with impressive performance recently. However, one important problem, data augmentation for code data that automatically helps developers prepare training data lacks study in the field of code learning. In this paper, we introduce a general data augmentation framework, GenCode, to enhance the training of code understanding models. GenCode follows a generation-and-selection paradigm to prepare useful training codes. Specifically, it uses code transformation techniques to generate new code candidates first and then selects important ones as the training data by importance metrics. To evaluate the effectiveness of GenCode with a general importance metric -- loss value, we conduct experiments on four code understanding tasks (e.g., code clone detection) and three pre-trained code models (e.g., CodeT5). Compared to the state-of-the-art (SOTA) code augmentation method, MixCode, GenCode produces code models with 2.92% higher accuracy and 4.90% robustness on average.
|
1807.00602
|
Evgeniy Gryaznov
|
Evgeniy Gryaznov
|
Semantic Query Language for Temporal Genealogical Trees
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Computers play a crucial role in modern ancestry management, they are used to
collect, store, analyze, sort and display genealogical data. However, current
applications do not take into account the kinship structure of a natural
language.
In this paper we propose a new domain-specific language KISP which is based
on a formalization of English kinship system, for accessing and querying
traditional genealogical trees. KISP is a dynamically typed LISP-like
programming language with a rich set of features, such as kinship term
reduction and temporal information expression.
Our solution provides a user with a coherent genealogical framework that
allows for a natural navigation over any traditional family tree.
|
[
{
"created": "Mon, 2 Jul 2018 11:27:51 GMT",
"version": "v1"
}
] |
2018-07-03
|
[
[
"Gryaznov",
"Evgeniy",
""
]
] |
Computers play a crucial role in modern ancestry management, they are used to collect, store, analyze, sort and display genealogical data. However, current applications do not take into account the kinship structure of a natural language. In this paper we propose a new domain-specific language KISP which is based on a formalization of English kinship system, for accessing and querying traditional genealogical trees. KISP is a dynamically typed LISP-like programming language with a rich set of features, such as kinship term reduction and temporal information expression. Our solution provides a user with a coherent genealogical framework that allows for a natural navigation over any traditional family tree.
|
2202.08870
|
Pat Morin
|
Prosenjit Bose, Pat Morin, and Saeed Odak
|
An Optimal Algorithm for Product Structure in Planar Graphs
|
14 pages, 5 figures
| null | null | null |
cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The \emph{Product Structure Theorem} for planar graphs (Dujmovi\'c et al.\
\emph{JACM}, \textbf{67}(4):22) states that any planar graph is contained in
the strong product of a planar $3$-tree, a path, and a $3$-cycle. We give a
simple linear-time algorithm for finding this decomposition as well as several
related decompositions. This improves on the previous $O(n\log n)$ time
algorithm (Morin.\ \emph{Algorithmica}, \textbf{85}(5):1544--1558).
|
[
{
"created": "Thu, 17 Feb 2022 19:13:44 GMT",
"version": "v1"
}
] |
2022-02-21
|
[
[
"Bose",
"Prosenjit",
""
],
[
"Morin",
"Pat",
""
],
[
"Odak",
"Saeed",
""
]
] |
The \emph{Product Structure Theorem} for planar graphs (Dujmovi\'c et al.\ \emph{JACM}, \textbf{67}(4):22) states that any planar graph is contained in the strong product of a planar $3$-tree, a path, and a $3$-cycle. We give a simple linear-time algorithm for finding this decomposition as well as several related decompositions. This improves on the previous $O(n\log n)$ time algorithm (Morin.\ \emph{Algorithmica}, \textbf{85}(5):1544--1558).
|
2105.01913
|
Samin Aref
|
Samin Aref and Zachary P. Neal
|
Identifying hidden coalitions in the US House of Representatives by
optimally partitioning signed networks based on generalized balance
|
Post-peer-review version, 23 pages, 7 figures, 2 tables, combined
article and supplementary information
| null |
10.1038/s41598-021-98139-w
| null |
cs.SI cs.CY math.OC physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In network science, identifying optimal partitions of a signed network into
internally cohesive and mutually divisive clusters based on generalized balance
theory is computationally challenging. We reformulate and generalize two binary
linear programming models that tackle this challenge, demonstrating their
practicality by applying them them to partition networks of collaboration in
the US House of Representatives. These models guarantee a globally optimal
network partition and can be practically applied to signed networks containing
up to 30,000 edges. In the US House context, we find that a three-cluster
partition is better than a conventional two-cluster partition, where the
otherwise hidden third coalition is composed of highly effective legislators
who are ideologically aligned with the majority party.
|
[
{
"created": "Wed, 5 May 2021 07:57:41 GMT",
"version": "v1"
},
{
"created": "Thu, 6 May 2021 05:17:10 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Sep 2021 22:17:22 GMT",
"version": "v3"
},
{
"created": "Wed, 27 Jul 2022 00:54:13 GMT",
"version": "v4"
}
] |
2022-07-28
|
[
[
"Aref",
"Samin",
""
],
[
"Neal",
"Zachary P.",
""
]
] |
In network science, identifying optimal partitions of a signed network into internally cohesive and mutually divisive clusters based on generalized balance theory is computationally challenging. We reformulate and generalize two binary linear programming models that tackle this challenge, demonstrating their practicality by applying them them to partition networks of collaboration in the US House of Representatives. These models guarantee a globally optimal network partition and can be practically applied to signed networks containing up to 30,000 edges. In the US House context, we find that a three-cluster partition is better than a conventional two-cluster partition, where the otherwise hidden third coalition is composed of highly effective legislators who are ideologically aligned with the majority party.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.