| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2407.11735
|
Erik Wallin
|
Erik Wallin, Lennart Svensson, Fredrik Kahl, Lars Hammarstrand
|
ProSub: Probabilistic Open-Set Semi-Supervised Learning with
Subspace-Based Out-of-Distribution Detection
|
ECCV2024
| null | null | null |
cs.LG cs.CV stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In open-set semi-supervised learning (OSSL), we consider unlabeled datasets
that may contain unknown classes. Existing OSSL methods often use the softmax
confidence for classifying data as in-distribution (ID) or out-of-distribution
(OOD). Additionally, many works for OSSL rely on ad-hoc thresholds for ID/OOD
classification, without considering the statistics of the problem. We propose a
new score for ID/OOD classification based on angles in feature space between
data and an ID subspace. Moreover, we propose an approach to estimate the
conditional distributions of scores given ID or OOD data, enabling
probabilistic predictions of data being ID or OOD. These components are put
together in a framework for OSSL, termed \emph{ProSub}, that is experimentally
shown to reach SOTA performance on several benchmark problems. Our code is
available at https://github.com/walline/prosub.
|
[
{
"created": "Tue, 16 Jul 2024 14:05:16 GMT",
"version": "v1"
}
] |
2024-07-17
|
[
[
"Wallin",
"Erik",
""
],
[
"Svensson",
"Lennart",
""
],
[
"Kahl",
"Fredrik",
""
],
[
"Hammarstrand",
"Lars",
""
]
] |
In open-set semi-supervised learning (OSSL), we consider unlabeled datasets that may contain unknown classes. Existing OSSL methods often use the softmax confidence for classifying data as in-distribution (ID) or out-of-distribution (OOD). Additionally, many works for OSSL rely on ad-hoc thresholds for ID/OOD classification, without considering the statistics of the problem. We propose a new score for ID/OOD classification based on angles in feature space between data and an ID subspace. Moreover, we propose an approach to estimate the conditional distributions of scores given ID or OOD data, enabling probabilistic predictions of data being ID or OOD. These components are put together in a framework for OSSL, termed \emph{ProSub}, that is experimentally shown to reach SOTA performance on several benchmark problems. Our code is available at https://github.com/walline/prosub.
|
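As a rough sketch of the angle-based score described in the abstract above, the snippet below scores samples by the angle between a feature vector and its orthogonal projection onto an ID subspace; the orthonormal basis, the toy dimensions, and the plain-projection scoring are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def subspace_angle_score(features, basis):
    """Score each row of `features` by the angle (radians) between it and
    its orthogonal projection onto the subspace spanned by the orthonormal
    columns of `basis`. Smaller angles suggest in-distribution data."""
    proj = features @ basis @ basis.T          # projection onto the subspace
    cos = np.sum(features * proj, axis=1) / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(proj, axis=1) + 1e-12
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Toy example: a 2-D "ID subspace" inside a 4-D feature space.
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.normal(size=(4, 2)))   # orthonormal basis
id_point = basis @ np.array([1.0, 2.0])            # lies in the subspace
ood_point = rng.normal(size=4)                     # generic off-subspace point
scores = subspace_angle_score(np.stack([id_point, ood_point]), basis)
```

An in-subspace point scores (near) zero while a generic point scores higher; ProSub then models the conditional distributions of such scores to turn them into probabilistic ID/OOD predictions.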
1606.07767
|
Dimitri Nowicki
|
Artem Chernodub and Dimitri Nowicki
|
Sampling-based Gradient Regularization for Capturing Long-Term
Dependencies in Recurrent Neural Networks
| null | null | null | null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The vanishing (and exploding) gradient effect is a common problem for
recurrent neural networks with nonlinear activation functions that use
backpropagation to compute derivatives. Deep feedforward neural networks
with many hidden layers also suffer from this effect. In this paper we
propose a novel universal technique that keeps the norm of the gradient in
a suitable range. We construct a way to estimate the contribution of each
training example to the norm of the long-term components of the target
function's gradient. Using this subroutine we can construct mini-batches
for stochastic gradient descent (SGD) training that lead to high
performance and accuracy of the trained network even for very complex
tasks. We provide a straightforward mathematical estimate of a
mini-batch's impact on the gradient norm and prove its correctness
theoretically. To check our framework experimentally we use special
synthetic benchmarks that test the ability of RNNs to capture long-term
dependencies. Our network can detect links between events in a (temporal)
sequence at ranges of approximately 100 steps and longer.
|
[
{
"created": "Fri, 24 Jun 2016 17:31:02 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Jan 2017 21:30:29 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Feb 2017 21:25:26 GMT",
"version": "v3"
}
] |
2017-02-15
|
[
[
"Chernodub",
"Artem",
""
],
[
"Nowicki",
"Dimitri",
""
]
] |
The vanishing (and exploding) gradient effect is a common problem for recurrent neural networks with nonlinear activation functions that use backpropagation to compute derivatives. Deep feedforward neural networks with many hidden layers also suffer from this effect. In this paper we propose a novel universal technique that keeps the norm of the gradient in a suitable range. We construct a way to estimate the contribution of each training example to the norm of the long-term components of the target function's gradient. Using this subroutine we can construct mini-batches for stochastic gradient descent (SGD) training that lead to high performance and accuracy of the trained network even for very complex tasks. We provide a straightforward mathematical estimate of a mini-batch's impact on the gradient norm and prove its correctness theoretically. To check our framework experimentally we use special synthetic benchmarks that test the ability of RNNs to capture long-term dependencies. Our network can detect links between events in a (temporal) sequence at ranges of approximately 100 steps and longer.
|
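The per-example gradient-contribution idea above can be sketched on a toy linear model; the squared loss, the linear model, and the greedy top-k selection here are stand-ins chosen only to illustrate the mechanics, not the paper's treatment of long-term gradient components.

```python
import numpy as np

def per_example_grad_norms(w, X, y):
    """Norm of each example's gradient of the squared loss 0.5*(x.w - y)^2
    under a linear model (a stand-in for the long-term gradient components
    discussed in the abstract)."""
    residual = X @ w - y                 # shape (n,)
    grads = residual[:, None] * X        # per-example gradient w.r.t. w
    return np.linalg.norm(grads, axis=1)

def select_minibatch(w, X, y, k):
    """Build a mini-batch from the k examples contributing most to the
    gradient norm."""
    return np.argsort(per_example_grad_norms(w, X, y))[::-1][:k]

rng = np.random.default_rng(1)
X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
w = np.zeros(3)
batch = select_minibatch(w, X, y, k=4)   # indices of the 4 largest contributors
```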
2403.19768
|
Kevin Barkevich
|
Kevin Barkevich, Reynold Bailey and Gabriel J. Diaz
|
Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and
Precision in Virtual Reality
|
16 pages, 10 figures, accepted to ETRA 2024 Full Papers
| null |
10.1145/3654705
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Algorithms for the estimation of gaze direction from mobile and video-based
eye trackers typically involve tracking a feature of the eye that moves through
the eye camera image in a way that covaries with the shifting gaze direction,
such as the center or boundaries of the pupil. Tracking these features using
traditional computer vision techniques can be difficult due to partial
occlusion and environmental reflections. Although recent efforts to use machine
learning (ML) for pupil tracking have demonstrated superior results when
evaluated using standard measures of segmentation performance, little is known
of how these networks may affect the quality of the final gaze estimate. This
work provides an objective assessment of the impact of several contemporary
ML-based methods for eye feature tracking when the subsequent gaze estimate is
produced using either feature-based or model-based methods. Metrics include the
accuracy and precision of the gaze estimate, as well as drop-out rate.
|
[
{
"created": "Thu, 28 Mar 2024 18:43:25 GMT",
"version": "v1"
}
] |
2024-04-01
|
[
[
"Barkevich",
"Kevin",
""
],
[
"Bailey",
"Reynold",
""
],
[
"Diaz",
"Gabriel J.",
""
]
] |
Algorithms for the estimation of gaze direction from mobile and video-based eye trackers typically involve tracking a feature of the eye that moves through the eye camera image in a way that covaries with the shifting gaze direction, such as the center or boundaries of the pupil. Tracking these features using traditional computer vision techniques can be difficult due to partial occlusion and environmental reflections. Although recent efforts to use machine learning (ML) for pupil tracking have demonstrated superior results when evaluated using standard measures of segmentation performance, little is known of how these networks may affect the quality of the final gaze estimate. This work provides an objective assessment of the impact of several contemporary ML-based methods for eye feature tracking when the subsequent gaze estimate is produced using either feature-based or model-based methods. Metrics include the accuracy and precision of the gaze estimate, as well as drop-out rate.
|
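The accuracy and precision metrics named in the abstract are standard in eye tracking; a minimal sketch, assuming accuracy is the mean angular offset from a known target and precision is the RMS sample-to-sample angular difference (common definitions, not necessarily the paper's exact ones):

```python
import numpy as np

def angular_error_deg(gaze, target):
    """Angle in degrees between each gaze vector and a target direction."""
    gaze = gaze / np.linalg.norm(gaze, axis=1, keepdims=True)
    target = target / np.linalg.norm(target)
    return np.degrees(np.arccos(np.clip(gaze @ target, -1.0, 1.0)))

def accuracy_and_precision(gaze, target):
    """Accuracy: mean angular offset from the target.
    Precision: RMS of successive sample-to-sample angular differences."""
    accuracy = angular_error_deg(gaze, target).mean()
    cos_step = np.clip(
        np.sum(gaze[:-1] * gaze[1:], axis=1)
        / (np.linalg.norm(gaze[:-1], axis=1) * np.linalg.norm(gaze[1:], axis=1)),
        -1.0, 1.0)
    step = np.degrees(np.arccos(cos_step))
    return accuracy, np.sqrt(np.mean(step ** 2))

# Three gaze samples jittering around the optical axis.
target = np.array([0.0, 0.0, 1.0])
gaze = np.array([[0.01, 0.0, 1.0], [0.0, 0.01, 1.0], [0.01, 0.01, 1.0]])
acc, prec = accuracy_and_precision(gaze, target)
```

Drop-out rate, the third metric, would simply be the fraction of frames in which the pipeline fails to produce a gaze estimate.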
2203.12566
|
Alex Shafarenko
|
Alex Shafarenko
|
Winternitz stack protocols
|
33 pages, 4 figures. This updated version corrects the statistical
analysis in Section 2.2 and fixes some typos; the abstract has been
updated and the related work slightly extended
|
Cybersecurity (2024) 7:34
|
10.1186/s42400-024-00225-9
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper proposes and evaluates a new bipartite post-quantum digital
signature protocol based on Winternitz chains and the HORS oracle. Mutually
mistrustful Alice and Bob are able to agree and sign a series of documents in a
way that makes it impossible (within the assumed security model) to repudiate
their signatures. The number of signatures supported by a single public key is
limited by a large number but the security of the signature scheme is not
diminished by repeated application. A single public key supports both parties.
Some ramifications are discussed, security parameters evaluated and an
application area delineated for the proposed concept.
|
[
{
"created": "Wed, 23 Mar 2022 17:26:46 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Apr 2022 17:48:46 GMT",
"version": "v2"
}
] |
2024-04-05
|
[
[
"Shafarenko",
"Alex",
""
]
] |
This paper proposes and evaluates a new bipartite post-quantum digital signature protocol based on Winternitz chains and the HORS oracle. Mutually mistrustful Alice and Bob are able to agree and sign a series of documents in a way that makes it impossible (within the assumed security model) to repudiate their signatures. The number of signatures supported by a single public key is limited by a large number but the security of the signature scheme is not diminished by repeated application. A single public key supports both parties. Some ramifications are discussed, security parameters evaluated and an application area delineated for the proposed concept.
|
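The core primitive behind Winternitz chains is repeated hashing; a minimal single-chain sketch follows (a real Winternitz one-time signature signs message digits with many chains plus a checksum, which this toy omits, and the key material here is a placeholder):

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, n: int) -> bytes:
    """Apply the hash n times: H^n(x), the Winternitz chain primitive."""
    for _ in range(n):
        x = H(x)
    return x

# One chain signs a value m in [0, W): public key pk = H^W(sk),
# signature sig = H^m(sk); the verifier checks H^(W-m)(sig) == pk.
W = 16
sk = b"secret-seed"        # hypothetical secret key material
pk = chain(sk, W)
m = 5
sig = chain(sk, m)
ok = chain(sig, W - m) == pk
```

Walking the chain forward from `sig` reaches `pk` only if `sig` really was `H^m(sk)`; the checksum chains of a full Winternitz scheme prevent an attacker from advancing `m` on their own.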
2210.07414
|
Hamed Nilforoshan
|
Hamed Nilforoshan, Wenli Looi, Emma Pierson, Blanca Villanueva, Nic
Fishman, Yiling Chen, John Sholar, Beth Redbird, David Grusky, Jure Leskovec
|
Human mobility networks reveal increased segregation in large cities
| null | null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
A long-standing expectation is that large, dense, and cosmopolitan areas
support socioeconomic mixing and exposure between diverse individuals. It has
been difficult to assess this hypothesis because past approaches to measuring
socioeconomic mixing have relied on static residential housing data rather than
real-life exposures between people at work, in places of leisure, and in home
neighborhoods. Here we develop a new measure of exposure segregation (ES) that
captures the socioeconomic diversity of everyday encounters. Leveraging cell
phone mobility data to represent 1.6 billion exposures among 9.6 million people
in the United States, we measure exposure segregation across 382 Metropolitan
Statistical Areas (MSAs) and 2829 counties. We discover that exposure
segregation is 67% higher in the 10 largest MSAs than in small MSAs with fewer
than 100,000 residents. This means that, contrary to expectation, residents of
large cosmopolitan areas have significantly less exposure to diverse
individuals. Second, we find evidence that large cities offer a greater choice
of differentiated spaces targeted to specific socioeconomic groups, a dynamic
that accounts for this increase in everyday socioeconomic segregation. Third,
we discover that this segregation-increasing effect is countered when a city's
hubs (e.g., shopping malls) are positioned to bridge diverse neighborhoods and
thus attract people of all socioeconomic statuses. Overall, our findings
challenge a long-standing conjecture in human geography and urban design, and
highlight how the built environment can both prevent and facilitate exposure
between diverse individuals.
|
[
{
"created": "Thu, 13 Oct 2022 23:31:33 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jul 2023 02:37:49 GMT",
"version": "v2"
}
] |
2023-07-26
|
[
[
"Nilforoshan",
"Hamed",
""
],
[
"Looi",
"Wenli",
""
],
[
"Pierson",
"Emma",
""
],
[
"Villanueva",
"Blanca",
""
],
[
"Fishman",
"Nic",
""
],
[
"Chen",
"Yiling",
""
],
[
"Sholar",
"John",
""
],
[
"Redbird",
"Beth",
""
],
[
"Grusky",
"David",
""
],
[
"Leskovec",
"Jure",
""
]
] |
A long-standing expectation is that large, dense, and cosmopolitan areas support socioeconomic mixing and exposure between diverse individuals. It has been difficult to assess this hypothesis because past approaches to measuring socioeconomic mixing have relied on static residential housing data rather than real-life exposures between people at work, in places of leisure, and in home neighborhoods. Here we develop a new measure of exposure segregation (ES) that captures the socioeconomic diversity of everyday encounters. Leveraging cell phone mobility data to represent 1.6 billion exposures among 9.6 million people in the United States, we measure exposure segregation across 382 Metropolitan Statistical Areas (MSAs) and 2829 counties. We discover that exposure segregation is 67% higher in the 10 largest MSAs than in small MSAs with fewer than 100,000 residents. This means that, contrary to expectation, residents of large cosmopolitan areas have significantly less exposure to diverse individuals. Second, we find evidence that large cities offer a greater choice of differentiated spaces targeted to specific socioeconomic groups, a dynamic that accounts for this increase in everyday socioeconomic segregation. Third, we discover that this segregation-increasing effect is countered when a city's hubs (e.g., shopping malls) are positioned to bridge diverse neighborhoods and thus attract people of all socioeconomic statuses. Overall, our findings challenge a long-standing conjecture in human geography and urban design, and highlight how the built environment can both prevent and facilitate exposure between diverse individuals.
|
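A toy index can illustrate what an exposure-based segregation measure looks like; the total-variation distance from the city-wide income mix used below is an assumption for illustration, not the paper's ES definition.

```python
import numpy as np

def exposure_segregation(encounters, income_quartile):
    """Toy exposure-segregation index: how far each person's encounter mix
    deviates (total-variation distance) from the city-wide income-quartile
    mix, averaged over people. 0 means fully mixed encounters."""
    q = np.asarray(income_quartile)
    citywide = np.bincount(q, minlength=4) / len(q)
    devs = []
    for person, met in encounters.items():
        mix = np.bincount(q[np.array(met)], minlength=4) / len(met)
        devs.append(0.5 * np.abs(mix - citywide).sum())
    return float(np.mean(devs))

# Four people in quartiles 0..3: meeting only one's own quartile is highly
# segregated; meeting everyone matches the city-wide mix exactly.
income_quartile = [0, 1, 2, 3]
segregated = {p: [p] for p in range(4)}
mixed = {p: [0, 1, 2, 3] for p in range(4)}
es_seg = exposure_segregation(segregated, income_quartile)
es_mix = exposure_segregation(mixed, income_quartile)
```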
2010.02778
|
Weichao Lan
|
Weichao Lan, Liang Lan
|
Compressing Deep Convolutional Neural Networks by Stacking
Low-dimensional Binary Convolution Filters
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Convolutional Neural Networks (CNNs) have been successfully applied to
many real-life problems. However, the huge memory cost of deep CNN models
poses a great challenge for deploying them on memory-constrained devices
(e.g., mobile phones). One popular way to reduce the memory cost of a deep
CNN model is to train a binary CNN, where the weights in the convolution
filters are either 1 or -1 and each weight can therefore be stored
efficiently using a single bit. However, the compression ratio of existing
binary CNN models is upper-bounded by around 32. To address this limitation,
we propose a novel method to compress deep CNN models by stacking
low-dimensional binary convolution filters. Our proposed method approximates
a standard convolution filter by selecting and stacking filters from a set
of low-dimensional binary convolution filters. This set of low-dimensional
binary convolution filters is shared across all filters for a given
convolution layer. Therefore, our method achieves a much larger compression
ratio than binary CNN models. To train our proposed model, we show
theoretically that it is equivalent to selecting and stacking intermediate
feature maps generated by the low-dimensional binary filters. Our proposed
model can therefore be trained efficiently using the split-transform-merge
strategy. We also provide a detailed analysis of the memory and computation
cost of our model at inference time. We compared the proposed method with
five other popular model compression techniques on two benchmark datasets.
Our experimental results demonstrate that our proposed method achieves a
much higher compression ratio than existing methods while maintaining
comparable accuracy.
|
[
{
"created": "Tue, 6 Oct 2020 14:49:22 GMT",
"version": "v1"
}
] |
2020-10-07
|
[
[
"Lan",
"Weichao",
""
],
[
"Lan",
"Liang",
""
]
] |
Deep Convolutional Neural Networks (CNNs) have been successfully applied to many real-life problems. However, the huge memory cost of deep CNN models poses a great challenge for deploying them on memory-constrained devices (e.g., mobile phones). One popular way to reduce the memory cost of a deep CNN model is to train a binary CNN, where the weights in the convolution filters are either 1 or -1 and each weight can therefore be stored efficiently using a single bit. However, the compression ratio of existing binary CNN models is upper-bounded by around 32. To address this limitation, we propose a novel method to compress deep CNN models by stacking low-dimensional binary convolution filters. Our proposed method approximates a standard convolution filter by selecting and stacking filters from a set of low-dimensional binary convolution filters. This set of low-dimensional binary convolution filters is shared across all filters for a given convolution layer. Therefore, our method achieves a much larger compression ratio than binary CNN models. To train our proposed model, we show theoretically that it is equivalent to selecting and stacking intermediate feature maps generated by the low-dimensional binary filters. Our proposed model can therefore be trained efficiently using the split-transform-merge strategy. We also provide a detailed analysis of the memory and computation cost of our model at inference time. We compared the proposed method with five other popular model compression techniques on two benchmark datasets. Our experimental results demonstrate that our proposed method achieves a much higher compression ratio than existing methods while maintaining comparable accuracy.
|
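The select-and-stack approximation can be sketched as a greedy fit of a real-valued filter by scaled {-1,+1} filters drawn from a shared set; the greedy rule and the stack depth of three are illustrative choices, not the authors' training procedure.

```python
import numpy as np

def approximate_filter(f, binary_set, depth=3):
    """Greedy sketch: approximate filter f (length d) as a sum of scaled
    {-1,+1} filters selected from a shared `binary_set` (rows)."""
    residual = f.copy()
    approx = np.zeros_like(f)
    for _ in range(depth):
        # Pick the binary filter most correlated with the residual.
        scores = binary_set @ residual
        b = binary_set[int(np.argmax(np.abs(scores)))]
        alpha = (b @ residual) / len(f)   # least-squares scale (b.b = d)
        approx += alpha * b
        residual -= alpha * b
    return approx

rng = np.random.default_rng(2)
d = 16
binary_set = rng.choice([-1.0, 1.0], size=(32, d))  # shared binary filters
f = rng.normal(size=d)                              # filter to compress
f_hat = approximate_filter(f, binary_set)
err = np.linalg.norm(f - f_hat) / np.linalg.norm(f)
```

Only the selection indices and scales need storing per filter, since the binary set is shared across the layer, which is where the extra compression beyond plain binarization comes from.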
1908.09480
|
EPTCS
|
Mathias Fleury (Max Planck Institut for Informatics), Hans-J\"org
Schurr (University of Lorraine, CNRS, Inria, and LORIA)
|
Reconstructing veriT Proofs in Isabelle/HOL
|
In Proceedings PxTP 2019, arXiv:1908.08639
|
EPTCS 301, 2019, pp. 36-50
|
10.4204/EPTCS.301.6
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated theorem provers are now commonly used within interactive theorem
provers to discharge an increasingly large number of proof obligations. To
maintain the trustworthiness of a proof, the automatically found proof must be
verified inside the proof assistant. We present here a reconstruction procedure
in the proof assistant Isabelle/HOL for proofs generated by the satisfiability
modulo theories solver veriT which is part of the smt tactic. We describe in
detail the architecture of our improved reconstruction method and the
challenges we faced in designing it. Our experiments show that the
veriT-powered smt tactic is regularly suggested by Sledgehammer as the fastest
method to automatically solve proof goals.
|
[
{
"created": "Mon, 26 Aug 2019 05:39:28 GMT",
"version": "v1"
}
] |
2019-08-27
|
[
[
"Fleury",
"Mathias",
"",
"Max Planck Institut for Informatics"
],
[
"Schurr",
"Hans-Jörg",
"",
"University of Lorraine, CNRS, Inria, and LORIA"
]
] |
Automated theorem provers are now commonly used within interactive theorem provers to discharge an increasingly large number of proof obligations. To maintain the trustworthiness of a proof, the automatically found proof must be verified inside the proof assistant. We present here a reconstruction procedure in the proof assistant Isabelle/HOL for proofs generated by the satisfiability modulo theories solver veriT which is part of the smt tactic. We describe in detail the architecture of our improved reconstruction method and the challenges we faced in designing it. Our experiments show that the veriT-powered smt tactic is regularly suggested by Sledgehammer as the fastest method to automatically solve proof goals.
|
2007.07138
|
Sabah Al-Fedaghi Dr.
|
Sabah Al-Fedaghi
|
Modeling the Semantics of States and State Machines
|
15 pages, 17 figures
|
Journal of Computer Science 2020, 16 (7): 891-905
|
10.3844/jcssp.2020.891.905
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A system's behavior is typically specified through models such as state
diagrams that describe how the system should behave. According to
researchers, it is not clear what a state actually represents with respect
to the system being modeled. Standards do not provide adequate definitions
of, or sufficient guidance on, the use of states. Studies show these
inconsistencies can lead to poor or incomplete specifications, which in turn
can cause project delays or increase the cost of the system design. This
paper aims to establish a precise definition of the notions of state and
state machine, a goal motivated by system modelers' (e.g., requirements
engineers') need to understand key concepts and vocabulary such as states
and state machines, which are major behavioral modeling tools (e.g., in
UML). The state is the central notion of a state machine, in which events
drive state changes. This raises questions about the nature of these
state-related notions. The semantics of these concepts is based on a new
modeling methodology called the thinging machine, applied to a number of
examples of existing models. The thinging machine semantics is founded on
five elementary actions that divide the static model into changes/states
upon which events are defined.
|
[
{
"created": "Tue, 14 Jul 2020 15:57:07 GMT",
"version": "v1"
}
] |
2020-07-15
|
[
[
"Al-Fedaghi",
"Sabah",
""
]
] |
A system's behavior is typically specified through models such as state diagrams that describe how the system should behave. According to researchers, it is not clear what a state actually represents with respect to the system being modeled. Standards do not provide adequate definitions of, or sufficient guidance on, the use of states. Studies show these inconsistencies can lead to poor or incomplete specifications, which in turn can cause project delays or increase the cost of the system design. This paper aims to establish a precise definition of the notions of state and state machine, a goal motivated by system modelers' (e.g., requirements engineers') need to understand key concepts and vocabulary such as states and state machines, which are major behavioral modeling tools (e.g., in UML). The state is the central notion of a state machine, in which events drive state changes. This raises questions about the nature of these state-related notions. The semantics of these concepts is based on a new modeling methodology called the thinging machine, applied to a number of examples of existing models. The thinging machine semantics is founded on five elementary actions that divide the static model into changes/states upon which events are defined.
|
2303.10959
|
Nicky Zimmerman
|
Nicky Zimmerman and Matteo Sodano and Elias Marks and Jens Behley and
Cyrill Stachniss
|
Constructing Metric-Semantic Maps using Floor Plan Priors for Long-Term
Indoor Localization
|
7 pages, accepted to IROS 2023
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object-based maps are relevant for scene understanding since they integrate
geometric and semantic information about the environment, allowing
autonomous robots to robustly localize and to interact with objects. In this
paper, we address the task of constructing a metric-semantic map for the
purpose of long-term object-based localization. We exploit 3D object
detections from monocular RGB frames both for object-based map construction
and for globally localizing in the constructed map. To tailor the approach
to a target environment, we propose an efficient way of generating 3D
annotations to finetune the 3D object detection model. We evaluate our map
construction in an office building, and test our long-term localization
approach on challenging sequences recorded in the same environment over nine
months. The experiments suggest that our approach is suitable for
constructing metric-semantic maps, and that our localization approach is
robust to long-term changes. Both the mapping algorithm and the localization
pipeline can run online on an onboard computer. We release an open-source
C++/ROS implementation of our approach.
|
[
{
"created": "Mon, 20 Mar 2023 09:33:05 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Oct 2023 15:56:51 GMT",
"version": "v2"
}
] |
2023-10-16
|
[
[
"Zimmerman",
"Nicky",
""
],
[
"Sodano",
"Matteo",
""
],
[
"Marks",
"Elias",
""
],
[
"Behley",
"Jens",
""
],
[
"Stachniss",
"Cyrill",
""
]
] |
Object-based maps are relevant for scene understanding since they integrate geometric and semantic information about the environment, allowing autonomous robots to robustly localize and to interact with objects. In this paper, we address the task of constructing a metric-semantic map for the purpose of long-term object-based localization. We exploit 3D object detections from monocular RGB frames both for object-based map construction and for globally localizing in the constructed map. To tailor the approach to a target environment, we propose an efficient way of generating 3D annotations to finetune the 3D object detection model. We evaluate our map construction in an office building, and test our long-term localization approach on challenging sequences recorded in the same environment over nine months. The experiments suggest that our approach is suitable for constructing metric-semantic maps, and that our localization approach is robust to long-term changes. Both the mapping algorithm and the localization pipeline can run online on an onboard computer. We release an open-source C++/ROS implementation of our approach.
|
1810.07075
|
Shaofeng Yuan
|
Yujiao Tang, Feng Yang, Shaofeng Yuan, Chang'an Zhan
|
A Multi-stage Framework with Context Information Fusion Structure for
Skin Lesion Segmentation
|
4 pages, 3 figures, 1 table
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer-aided diagnosis (CAD) systems can greatly improve the reliability
and efficiency of melanoma recognition. As a crucial step of CAD, skin
lesion segmentation has unsatisfactory accuracy in existing methods due to
large variability in lesion appearance and artifacts. In this work, we
propose a framework employing multi-stage UNets (MS-UNet) in the
auto-context scheme to segment skin lesions accurately end-to-end. We apply
two approaches to boost the performance of MS-UNet. First, UNet is coupled
with a context information fusion structure (CIFS) to integrate the
low-level and context information in the multi-scale feature space. Second,
to alleviate the vanishing-gradient problem, we use a deep supervision
mechanism, supervising MS-UNet by minimizing a weighted Jaccard distance
loss function. Four out of five commonly used performance metrics, including
the Jaccard index and Dice coefficient, show that our approach outperforms
state-of-the-art deep-learning-based methods on the ISBI 2016 Skin Lesion
Challenge dataset.
|
[
{
"created": "Tue, 16 Oct 2018 15:26:30 GMT",
"version": "v1"
}
] |
2018-10-17
|
[
[
"Tang",
"Yujiao",
""
],
[
"Yang",
"Feng",
""
],
[
"Yuan",
"Shaofeng",
""
],
[
"Zhan",
"Chang'an",
""
]
] |
Computer-aided diagnosis (CAD) systems can greatly improve the reliability and efficiency of melanoma recognition. As a crucial step of CAD, skin lesion segmentation has unsatisfactory accuracy in existing methods due to large variability in lesion appearance and artifacts. In this work, we propose a framework employing multi-stage UNets (MS-UNet) in the auto-context scheme to segment skin lesions accurately end-to-end. We apply two approaches to boost the performance of MS-UNet. First, UNet is coupled with a context information fusion structure (CIFS) to integrate the low-level and context information in the multi-scale feature space. Second, to alleviate the vanishing-gradient problem, we use a deep supervision mechanism, supervising MS-UNet by minimizing a weighted Jaccard distance loss function. Four out of five commonly used performance metrics, including the Jaccard index and Dice coefficient, show that our approach outperforms state-of-the-art deep-learning-based methods on the ISBI 2016 Skin Lesion Challenge dataset.
|
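The weighted Jaccard distance loss used for deep supervision can be sketched in its common soft form; the uniform default weighting here is an assumption, not necessarily the paper's scheme.

```python
import numpy as np

def weighted_jaccard_loss(pred, target, weights=None, eps=1e-7):
    """Soft (weighted) Jaccard distance between predicted probabilities and
    a binary mask: 1 - |intersection| / |union|, with optional per-pixel
    weights (uniform by default)."""
    pred, target = pred.ravel(), target.ravel()
    w = np.ones_like(pred) if weights is None else np.asarray(weights).ravel()
    inter = np.sum(w * pred * target)
    union = np.sum(w * (pred + target - pred * target))
    return 1.0 - (inter + eps) / (union + eps)

mask = np.array([[1.0, 1.0], [0.0, 0.0]])
perfect = mask.copy()      # overlap is total, loss near 0
poor = 1.0 - mask          # overlap is empty, loss near 1
```

Unlike pixel-wise cross-entropy, this loss directly optimizes the overlap metric that segmentation benchmarks report, which is why it pairs naturally with deep supervision at each stage.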
2304.02312
|
Thibault Maho
|
Thibault Maho, Seyed-Mohsen Moosavi-Dezfooli, Teddy Furon
|
How to choose your best allies for a transferable attack?
|
ICCV 2023
| null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transferability of adversarial examples is a key issue in the security of
deep neural networks. The possibility of an adversarial example crafted for a
source model fooling another targeted model makes the threat of adversarial
attacks more realistic. Measuring transferability is a crucial problem, but the
Attack Success Rate alone does not provide a sound evaluation. This paper
proposes a new methodology for evaluating transferability by putting distortion
in a central position. This new tool shows that transferable attacks may
perform far worse than a black box attack if the attacker randomly picks the
source model. To address this issue, we propose a new selection mechanism,
called FiT, which aims at choosing the best source model with only a few
preliminary queries to the target. Our experimental results show that FiT is
highly effective at selecting the best source model for multiple scenarios such
as single-model attacks, ensemble-model attacks and multiple attacks (Code
available at: https://github.com/t-maho/transferability_measure_fit).
|
[
{
"created": "Wed, 5 Apr 2023 09:08:02 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jul 2023 14:36:20 GMT",
"version": "v2"
}
] |
2023-07-18
|
[
[
"Maho",
"Thibault",
""
],
[
"Moosavi-Dezfooli",
"Seyed-Mohsen",
""
],
[
"Furon",
"Teddy",
""
]
] |
The transferability of adversarial examples is a key issue in the security of deep neural networks. The possibility of an adversarial example crafted for a source model fooling another targeted model makes the threat of adversarial attacks more realistic. Measuring transferability is a crucial problem, but the Attack Success Rate alone does not provide a sound evaluation. This paper proposes a new methodology for evaluating transferability by putting distortion in a central position. This new tool shows that transferable attacks may perform far worse than a black box attack if the attacker randomly picks the source model. To address this issue, we propose a new selection mechanism, called FiT, which aims at choosing the best source model with only a few preliminary queries to the target. Our experimental results show that FiT is highly effective at selecting the best source model for multiple scenarios such as single-model attacks, ensemble-model attacks and multiple attacks (Code available at: https://github.com/t-maho/transferability_measure_fit).
|
1208.0688
|
Wei Xi
|
Jizhong Zhao, Wei Xi, Jinsong Han, Shaojie Tang, Xiangyang Li, Yunhao
Liu, Yihong Gong, Zehua Zhou
|
Efficient and Secure Key Extraction using CSI without Chasing down
Errors
|
Submitted to INFOCOM 2013
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating keys and keeping them secret is critical in secure
communications. Due to the "open-air" nature of wireless communications, key
distribution is more susceptible to attacks. An ingenious solution is for
the two communicating parties to generate common secret keys separately,
without key exchange or distribution, and to regenerate them on demand.
Recently, extracting keys by measuring the random variation in wireless
channels, e.g., RSS, has shown promise. In this paper, we propose an
efficient Secret Key Extraction protocol without Chasing down Errors,
SKECE. It establishes common cryptographic keys for two communicating
parties in wireless networks via real-time measurement of Channel State
Information (CSI). It outperforms RSS-based approaches for key generation
thanks to measurements over multiple subcarriers, perfect channel symmetry,
rapid decorrelation with distance, and high sensitivity to the environment.
In the SKECE design, we also propose effective mechanisms such as adaptive
key stream generation, leakage-resilient consistency validation, and
weighted key recombination to fully exploit these excellent properties of
CSI. We implement SKECE on off-the-shelf 802.11n devices and evaluate its
performance via extensive experiments. The results demonstrate that SKECE
achieves a more than 3x throughput gain in key generation from a single
subcarrier in static scenarios and, due to its high efficiency, a 50%
reduction in communication overhead compared to state-of-the-art RSS-based
approaches.
|
[
{
"created": "Fri, 3 Aug 2012 08:35:43 GMT",
"version": "v1"
}
] |
2012-08-06
|
[
[
"Zhao",
"Jizhong",
""
],
[
"Xi",
"Wei",
""
],
[
"Han",
"Jinsong",
""
],
[
"Tang",
"Shaojie",
""
],
[
"Li",
"Xiangyang",
""
],
[
"Liu",
"Yunhao",
""
],
[
"Gong",
"Yihong",
""
],
[
"Zhou",
"Zehua",
""
]
] |
Generating keys and keeping them secret is critical in secure communications. Due to the "open-air" nature of wireless communications, key distribution is more susceptible to attacks. An ingenious solution is for two communicating parties to generate common secret keys separately, without the need for key exchange or distribution, and to regenerate them on demand. Recently, it has become promising to extract keys by measuring the random variation in wireless channels, e.g., RSS. In this paper, we propose SKECE, an efficient Secret Key Extraction protocol without Chasing down Errors. It establishes common cryptographic keys for two communicating parties in wireless networks via real-time measurement of Channel State Information (CSI). CSI outperforms RSS for key generation thanks to its measurement over multiple subcarriers, perfect symmetry in the channel, rapid decorrelation with distance, and high sensitivity to environments. In the SKECE design, we also propose effective mechanisms, such as adaptive key stream generation, leakage-resilient consistency validation, and weighted key recombination, to fully exploit these excellent properties of CSI. We implement SKECE on off-the-shelf 802.11n devices and evaluate its performance via extensive experiments. The results demonstrate that SKECE achieves more than a 3x throughput gain in key generation from one subcarrier in static scenarios, and, due to its high efficiency, a 50% reduction in communication overhead compared to state-of-the-art RSS-based approaches.
|
2211.06918
|
Ismael Perez
|
Ilkay Altintas, Ismael Perez, Dmitry Mishin, Adrien Trouillaud,
Christopher Irving, John Graham, Mahidhar Tatineni, Thomas DeFanti, Shawn
Strande, Larry Smarr, Michael L. Norman
|
Towards a Dynamic Composability Approach for using Heterogeneous Systems
in Remote Sensing
|
18th IEEE International Conference on eScience (2022)
| null | null | null |
cs.DC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Influenced by the advances in data and computing, scientific practice
increasingly involves machine learning and artificial intelligence driven
methods, which require specialized capabilities at the system, science, and
service levels in addition to conventional large-capacity supercomputing
approaches. The latest distributed architectures built around the
composability of data-centric applications have led to the emergence of a new
ecosystem for container coordination and integration. However, there is still
a divide between the application development pipelines of existing
supercomputing environments and these new dynamic environments, which
disaggregate fluid resource pools through accessible, portable and
re-programmable interfaces. New approaches for the dynamic composability of
heterogeneous systems are needed to further advance data-driven scientific
practice toward more efficient computing and usable tools for specific
scientific domains. In this paper, we present a novel approach for using
composable systems at the intersection of scientific computing, artificial
intelligence (AI), and the remote sensing domain. We describe the architecture
of a first working example of a composable infrastructure that federates
Expanse, an NSF-funded supercomputer, with Nautilus, a Kubernetes-based,
geo-distributed GPU cluster. We also summarize a case study in wildfire
modeling that demonstrates the application of this new infrastructure in
scientific workflows: a composed system that bridges insights from edge
sensing, AI, and computing capabilities with a physics-driven simulation.
|
[
{
"created": "Sun, 13 Nov 2022 14:48:00 GMT",
"version": "v1"
}
] |
2022-11-15
|
[
[
"Altintas",
"Ilkay",
""
],
[
"Perez",
"Ismael",
""
],
[
"Mishin",
"Dmitry",
""
],
[
"Trouillaud",
"Adrien",
""
],
[
"Irving",
"Christopher",
""
],
[
"Graham",
"John",
""
],
[
"Tatineni",
"Mahidhar",
""
],
[
"DeFanti",
"Thomas",
""
],
[
"Strande",
"Shawn",
""
],
[
"Smarr",
"Larry",
""
],
[
"Norman",
"Michael L.",
""
]
] |
Influenced by the advances in data and computing, scientific practice increasingly involves machine learning and artificial intelligence driven methods, which require specialized capabilities at the system, science, and service levels in addition to conventional large-capacity supercomputing approaches. The latest distributed architectures built around the composability of data-centric applications have led to the emergence of a new ecosystem for container coordination and integration. However, there is still a divide between the application development pipelines of existing supercomputing environments and these new dynamic environments, which disaggregate fluid resource pools through accessible, portable and re-programmable interfaces. New approaches for the dynamic composability of heterogeneous systems are needed to further advance data-driven scientific practice toward more efficient computing and usable tools for specific scientific domains. In this paper, we present a novel approach for using composable systems at the intersection of scientific computing, artificial intelligence (AI), and the remote sensing domain. We describe the architecture of a first working example of a composable infrastructure that federates Expanse, an NSF-funded supercomputer, with Nautilus, a Kubernetes-based, geo-distributed GPU cluster. We also summarize a case study in wildfire modeling that demonstrates the application of this new infrastructure in scientific workflows: a composed system that bridges insights from edge sensing, AI, and computing capabilities with a physics-driven simulation.
|
1809.08211
|
Fulvio Mastrogiovanni
|
Wojciech Wasko, Alessandro Albini, Perla Maiolino, Fulvio
Mastrogiovanni, Giorgio Cannata
|
Contact modelling and tactile data processing for robot skin
|
Submitted to Robotics and Autonomous Systems
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tactile sensing is a key enabling technology for developing complex behaviours
in robots interacting with humans or the environment. This paper discusses
computational aspects that play a significant role in extracting information
about contact events. Considering a large-scale, capacitance-based robot skin
technology we have developed in the past few years, we analyse the classical
Boussinesq-Cerruti solution and Love's approach for solving a distributed
inverse contact problem, from both a qualitative and a computational
perspective. Our contribution is the characterisation of the algorithms'
performance using a freely available dataset and data originating from
surfaces equipped with robot skin.
|
[
{
"created": "Fri, 21 Sep 2018 17:05:34 GMT",
"version": "v1"
}
] |
2018-09-24
|
[
[
"Wasko",
"Wojciech",
""
],
[
"Albini",
"Alessandro",
""
],
[
"Maiolino",
"Perla",
""
],
[
"Mastrogiovanni",
"Fulvio",
""
],
[
"Cannata",
"Giorgio",
""
]
] |
Tactile sensing is a key enabling technology for developing complex behaviours in robots interacting with humans or the environment. This paper discusses computational aspects that play a significant role in extracting information about contact events. Considering a large-scale, capacitance-based robot skin technology we have developed in the past few years, we analyse the classical Boussinesq-Cerruti solution and Love's approach for solving a distributed inverse contact problem, from both a qualitative and a computational perspective. Our contribution is the characterisation of the algorithms' performance using a freely available dataset and data originating from surfaces equipped with robot skin.
|
2109.01820
|
Chenjie Wang
|
Chenjie Wang, Chengyuan Li, Bin Luo, Wei Wang, Jun Liu
|
RiWNet: A moving object instance segmentation Network being Robust in
adverse Weather conditions
|
12 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segmenting each moving object instance in a scene is essential for many
applications. But like many other computer vision tasks, this task performs
well in optimal weather and tends to fail in adverse weather. To be robust to
weather conditions, the usual way is to train the network on data of a given
weather pattern or to fuse multiple sensors. We focus on a new possibility,
that is, to improve resilience to weather interference through the network's
structural design. First, we propose a novel FPN structure called RiWFPN with
a progressive top-down interaction and attention refinement module. RiWFPN can
directly replace other FPN structures to improve the robustness of the network
in non-optimal weather conditions. Then we extend SOLOV2 to capture temporal
information in video to learn motion information, and propose a moving object
instance segmentation network with RiWFPN called RiWNet. Finally, in order to
verify the effect of moving instance segmentation under different weather
disturbances, we propose VKTTI-moving, a moving instance segmentation dataset
based on the VKTTI dataset that takes into account different weather scenes
such as rain, fog, sunset, morning, and overcast. The experiments demonstrate
how RiWFPN improves the network's resilience to adverse weather effects
compared to other FPN structures. We compare RiWNet to several other
state-of-the-art methods on challenging datasets, and RiWNet shows better
performance, especially under adverse weather conditions.
|
[
{
"created": "Sat, 4 Sep 2021 08:55:36 GMT",
"version": "v1"
}
] |
2021-09-07
|
[
[
"Wang",
"Chenjie",
""
],
[
"Li",
"Chengyuan",
""
],
[
"Luo",
"Bin",
""
],
[
"Wang",
"Wei",
""
],
[
"Liu",
"Jun",
""
]
] |
Segmenting each moving object instance in a scene is essential for many applications. But like many other computer vision tasks, this task performs well in optimal weather and tends to fail in adverse weather. To be robust to weather conditions, the usual way is to train the network on data of a given weather pattern or to fuse multiple sensors. We focus on a new possibility, that is, to improve resilience to weather interference through the network's structural design. First, we propose a novel FPN structure called RiWFPN with a progressive top-down interaction and attention refinement module. RiWFPN can directly replace other FPN structures to improve the robustness of the network in non-optimal weather conditions. Then we extend SOLOV2 to capture temporal information in video to learn motion information, and propose a moving object instance segmentation network with RiWFPN called RiWNet. Finally, in order to verify the effect of moving instance segmentation under different weather disturbances, we propose VKTTI-moving, a moving instance segmentation dataset based on the VKTTI dataset that takes into account different weather scenes such as rain, fog, sunset, morning, and overcast. The experiments demonstrate how RiWFPN improves the network's resilience to adverse weather effects compared to other FPN structures. We compare RiWNet to several other state-of-the-art methods on challenging datasets, and RiWNet shows better performance, especially under adverse weather conditions.
|
2211.07514
|
Anmol Agarwal
|
Anmol Agarwal, Jigar Gupta, Rahul Goel, Shyam Upadhyay, Pankaj Joshi,
Rengarajan Aravamudhan
|
CST5: Data Augmentation for Code-Switched Semantic Parsing
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Extending semantic parsers to code-switched input has been a challenging
problem, primarily due to a lack of supervised training data. In this work, we
introduce CST5, a new data augmentation technique that finetunes a T5 model
using a small seed set ($\approx$100 utterances) to generate code-switched
utterances from English utterances. We show that CST5 generates high quality
code-switched data, both intrinsically (per human evaluation) and extrinsically
by comparing baseline models which are trained without data augmentation to
models which are trained with augmented data. Empirically we observe that using
CST5, one can achieve the same semantic parsing performance by using up to 20x
less labeled data. To aid further research in this area, we are also releasing
(a) Hinglish-TOP, the largest human-annotated code-switched semantic parsing
dataset to date, containing 10k human-annotated Hindi-English (Hinglish)
code-switched utterances, and (b) over 170K CST5-generated code-switched
utterances from the TOPv2 dataset. Human evaluation shows that both the
human-annotated data and the CST5-generated data are of good quality.
|
[
{
"created": "Mon, 14 Nov 2022 16:45:30 GMT",
"version": "v1"
}
] |
2022-11-15
|
[
[
"Agarwal",
"Anmol",
""
],
[
"Gupta",
"Jigar",
""
],
[
"Goel",
"Rahul",
""
],
[
"Upadhyay",
"Shyam",
""
],
[
"Joshi",
"Pankaj",
""
],
[
"Aravamudhan",
"Rengarajan",
""
]
] |
Extending semantic parsers to code-switched input has been a challenging problem, primarily due to a lack of supervised training data. In this work, we introduce CST5, a new data augmentation technique that finetunes a T5 model using a small seed set ($\approx$100 utterances) to generate code-switched utterances from English utterances. We show that CST5 generates high quality code-switched data, both intrinsically (per human evaluation) and extrinsically by comparing baseline models which are trained without data augmentation to models which are trained with augmented data. Empirically we observe that using CST5, one can achieve the same semantic parsing performance by using up to 20x less labeled data. To aid further research in this area, we are also releasing (a) Hinglish-TOP, the largest human-annotated code-switched semantic parsing dataset to date, containing 10k human-annotated Hindi-English (Hinglish) code-switched utterances, and (b) over 170K CST5-generated code-switched utterances from the TOPv2 dataset. Human evaluation shows that both the human-annotated data and the CST5-generated data are of good quality.
|
2207.11850
|
Yudong Han
|
Yudong Han, Liqiang Nie, Jianhua Yin, Jianlong Wu, Yan Yan
|
Visual Perturbation-aware Collaborative Learning for Overcoming the
Language Prior Problem
|
13 pages, 10 figures, submitted to IEEE Transactions on Pattern
Analysis and Machine Intelligence
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several studies have recently pointed out that existing Visual Question
Answering (VQA) models heavily suffer from the language prior problem, which
refers to capturing superficial statistical correlations between the question
type and the answer while ignoring the image contents. Numerous efforts have
been dedicated to strengthening the image dependency by creating delicate
models or introducing extra visual annotations. However, these methods cannot
sufficiently explore how the visual cues explicitly affect the learned answer
representation, which is vital for alleviating language reliance. Moreover,
they generally emphasize the class-level discrimination of the learned answer
representation, which overlooks the more fine-grained instance-level patterns
and demands further optimization. In this paper, we propose a novel
collaborative learning scheme from the viewpoint of visual perturbation
calibration, which can better investigate the fine-grained visual effects and
mitigate the language prior problem by learning the instance-level
characteristics. Specifically, we devise a visual controller to construct two
sorts of curated images with different perturbation extents, based on which
the collaborative learning of intra-instance invariance and inter-instance
discrimination is implemented by two well-designed discriminators. Besides, we
apply an information bottleneck modulator on the latent space for further bias
alleviation and representation calibration. We apply our visual
perturbation-aware framework to three orthodox baselines, and the experimental
results on two diagnostic VQA-CP benchmark datasets clearly demonstrate its
effectiveness. In addition, we also verify its robustness on the balanced VQA
benchmark.
|
[
{
"created": "Sun, 24 Jul 2022 23:50:52 GMT",
"version": "v1"
}
] |
2022-07-26
|
[
[
"Han",
"Yudong",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Yin",
"Jianhua",
""
],
[
"Wu",
"Jianlong",
""
],
[
"Yan",
"Yan",
""
]
] |
Several studies have recently pointed out that existing Visual Question Answering (VQA) models heavily suffer from the language prior problem, which refers to capturing superficial statistical correlations between the question type and the answer while ignoring the image contents. Numerous efforts have been dedicated to strengthening the image dependency by creating delicate models or introducing extra visual annotations. However, these methods cannot sufficiently explore how the visual cues explicitly affect the learned answer representation, which is vital for alleviating language reliance. Moreover, they generally emphasize the class-level discrimination of the learned answer representation, which overlooks the more fine-grained instance-level patterns and demands further optimization. In this paper, we propose a novel collaborative learning scheme from the viewpoint of visual perturbation calibration, which can better investigate the fine-grained visual effects and mitigate the language prior problem by learning the instance-level characteristics. Specifically, we devise a visual controller to construct two sorts of curated images with different perturbation extents, based on which the collaborative learning of intra-instance invariance and inter-instance discrimination is implemented by two well-designed discriminators. Besides, we apply an information bottleneck modulator on the latent space for further bias alleviation and representation calibration. We apply our visual perturbation-aware framework to three orthodox baselines, and the experimental results on two diagnostic VQA-CP benchmark datasets clearly demonstrate its effectiveness. In addition, we also verify its robustness on the balanced VQA benchmark.
|
2408.06685
|
Kim-Manuel Klein
|
Kim-Manuel Klein and Janina Reuter
|
Faster Lattice Basis Computation -- The Generalization of the Euclidean
Algorithm
|
20 pages. arXiv admin note: substantial text overlap with
arXiv:2311.15902
| null | null | null |
cs.DS cs.DM math.AG
|
http://creativecommons.org/licenses/by/4.0/
|
The Euclidean algorithm is one of the oldest algorithms known to mankind.
Given two integers $a_1$ and $a_2$, it computes the greatest common divisor
(gcd) of $a_1$ and $a_2$ in a very elegant way. From a lattice perspective, it
computes a basis of the sum of two one-dimensional lattices $a_1 \mathbb{Z}$
and $a_2 \mathbb{Z}$, as $\gcd(a_1,a_2) \mathbb{Z} = a_1 \mathbb{Z} + a_2
\mathbb{Z}$. In this paper, we show that the classical Euclidean algorithm can
be adapted in a very natural way to compute a basis of a general lattice
$L(A_1, \ldots, A_n)$ given vectors $A_1, \ldots, A_n \in \mathbb{Z}^d$ with
$n > \mathrm{rank}(A_1, \ldots, A_n)$. Similar to the Euclidean algorithm, our
algorithm is very easy to describe and implement and can be written within 12
lines of pseudocode.
Our generalized version of the Euclidean algorithm allows for several degrees
of freedom in the pivoting process. Hence, in a second step, we show that this
freedom can be exploited to make the algorithm perform more efficiently. As
our main result, we obtain an algorithm to compute a lattice basis for given
vectors $A_1, \ldots, A_n \in \mathbb{Z}^d$ in time (counting bit operations)
$LS + \tilde O((n-d)d^2 \cdot \log(||A||))$, where $LS$ is the time required
to obtain the exact fractional solution of a certain system of linear
equalities. The analysis of the running time of our algorithms relies on
fundamental statements on the fractionality of solutions of linear systems of
equations.
So far, the fastest algorithm for lattice basis computation was due to
Storjohann and Labahn [SL96], with a running time of $\tilde O(nd^\omega \log
||A||)$. For current upper bounds on $LS$, our algorithm has a running time
improvement of a factor of at least $d^{0.12}$ over [SL96]. Our algorithm is
therefore the first general algorithmic improvement to this classical problem
in nearly 30 years.
|
[
{
"created": "Tue, 13 Aug 2024 07:24:53 GMT",
"version": "v1"
}
] |
2024-08-14
|
[
[
"Klein",
"Kim-Manuel",
""
],
[
"Reuter",
"Janina",
""
]
] |
The Euclidean algorithm is one of the oldest algorithms known to mankind. Given two integers $a_1$ and $a_2$, it computes the greatest common divisor (gcd) of $a_1$ and $a_2$ in a very elegant way. From a lattice perspective, it computes a basis of the sum of two one-dimensional lattices $a_1 \mathbb{Z}$ and $a_2 \mathbb{Z}$, as $\gcd(a_1,a_2) \mathbb{Z} = a_1 \mathbb{Z} + a_2 \mathbb{Z}$. In this paper, we show that the classical Euclidean algorithm can be adapted in a very natural way to compute a basis of a general lattice $L(A_1, \ldots, A_n)$ given vectors $A_1, \ldots, A_n \in \mathbb{Z}^d$ with $n > \mathrm{rank}(A_1, \ldots, A_n)$. Similar to the Euclidean algorithm, our algorithm is very easy to describe and implement and can be written within 12 lines of pseudocode. Our generalized version of the Euclidean algorithm allows for several degrees of freedom in the pivoting process. Hence, in a second step, we show that this freedom can be exploited to make the algorithm perform more efficiently. As our main result, we obtain an algorithm to compute a lattice basis for given vectors $A_1, \ldots, A_n \in \mathbb{Z}^d$ in time (counting bit operations) $LS + \tilde O((n-d)d^2 \cdot \log(||A||))$, where $LS$ is the time required to obtain the exact fractional solution of a certain system of linear equalities. The analysis of the running time of our algorithms relies on fundamental statements on the fractionality of solutions of linear systems of equations. So far, the fastest algorithm for lattice basis computation was due to Storjohann and Labahn [SL96], with a running time of $\tilde O(nd^\omega \log ||A||)$. For current upper bounds on $LS$, our algorithm has a running time improvement of a factor of at least $d^{0.12}$ over [SL96]. Our algorithm is therefore the first general algorithmic improvement to this classical problem in nearly 30 years.
|
1711.06855
|
Ahmet Cetinkaya
|
Ahmet Cetinkaya, Hideaki Ishii, Tomohisa Hayakawa
|
A Probabilistic Characterization of Random and Malicious Communication
Failures in Multi-Hop Networked Control
|
Correct typos in Sections 3-6. Make changes in the proofs of Theorems
3.4 and 4.3
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The control problem of a linear discrete-time dynamical system over a
multi-hop network is explored. The network is assumed to be subject to packet
drops by malicious and nonmalicious nodes as well as random and malicious data
corruption issues. We utilize asymptotic tail-probability bounds of
transmission failure ratios to characterize the links and paths of a network as
well as the network itself. This probabilistic characterization allows us to
take into account multiple failures that depend on each other, and coordinated
malicious attacks on the network. We obtain a sufficient condition for the
stability of the networked control system by utilizing our probabilistic
approach. We then demonstrate the efficacy of our results in different
scenarios concerning transmission failures on a multi-hop network.
|
[
{
"created": "Sat, 18 Nov 2017 12:35:23 GMT",
"version": "v1"
},
{
"created": "Wed, 30 May 2018 04:30:39 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Sep 2018 04:27:27 GMT",
"version": "v3"
}
] |
2018-09-24
|
[
[
"Cetinkaya",
"Ahmet",
""
],
[
"Ishii",
"Hideaki",
""
],
[
"Hayakawa",
"Tomohisa",
""
]
] |
The control problem of a linear discrete-time dynamical system over a multi-hop network is explored. The network is assumed to be subject to packet drops by malicious and nonmalicious nodes as well as random and malicious data corruption issues. We utilize asymptotic tail-probability bounds of transmission failure ratios to characterize the links and paths of a network as well as the network itself. This probabilistic characterization allows us to take into account multiple failures that depend on each other, and coordinated malicious attacks on the network. We obtain a sufficient condition for the stability of the networked control system by utilizing our probabilistic approach. We then demonstrate the efficacy of our results in different scenarios concerning transmission failures on a multi-hop network.
|
2309.00665
|
Iurii Medvedev
|
Iurii Medvedev, Joana Pimenta, Nuno Gon\c{c}alves
|
Fused Classification For Differential Face Morphing Detection
|
8 pages, 3 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face morphing, a sophisticated presentation attack technique, poses
significant security risks to face recognition systems. Traditional methods
struggle to detect morphing attacks, which involve blending multiple face
images to create a synthetic image that can match different individuals. In
this paper, we focus on the differential detection of face morphing and propose
an extended approach based on a fused classification method for the
no-reference scenario. We introduce a public face morphing detection benchmark for the
differential scenario and utilize a specific data mining technique to enhance
the performance of our approach. Experimental results demonstrate the
effectiveness of our method in detecting morphing attacks.
|
[
{
"created": "Fri, 1 Sep 2023 16:14:29 GMT",
"version": "v1"
}
] |
2023-09-06
|
[
[
"Medvedev",
"Iurii",
""
],
[
"Pimenta",
"Joana",
""
],
[
"Gonçalves",
"Nuno",
""
]
] |
Face morphing, a sophisticated presentation attack technique, poses significant security risks to face recognition systems. Traditional methods struggle to detect morphing attacks, which involve blending multiple face images to create a synthetic image that can match different individuals. In this paper, we focus on the differential detection of face morphing and propose an extended approach based on a fused classification method for the no-reference scenario. We introduce a public face morphing detection benchmark for the differential scenario and utilize a specific data mining technique to enhance the performance of our approach. Experimental results demonstrate the effectiveness of our method in detecting morphing attacks.
|
1708.02377
|
Chengxi Zang
|
Chengxi Zang, Peng Cui, Chaoming Song, Christos Faloutsos and Wenwu
Zhu
|
Structural patterns of information cascades and their implications for
dynamics and semantics
| null | null | null | null |
cs.SI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information cascades are ubiquitous in both physical society and online
social media, taking on large variations in structures, dynamics and semantics.
Although the dynamics and semantics of information cascades have been studied,
the structural patterns and their correlations with dynamics and semantics are
largely unknown. Here we explore a large-scale dataset including $432$ million
information cascades with explicit records of spreading traces, spreading
behaviors, information content as well as user profiles. We find that the
structural complexity of information cascades is far beyond the previous
conjectures. We first propose a ten-dimensional metric to quantify the
structural characteristics of information cascades, reflecting cascade size,
silhouette, direction and activity aspects. We find that a bimodal law governs
the majority of the metrics, that information flows in cascades have four
directions, and that the self-loop number and average activity of cascades
follow power laws. We then analyze the high-order structural patterns of
information cascades. Finally, we evaluate to what extent the structural
features of information cascades can explain their dynamic patterns and
semantics, and uncover some notable implications of structural patterns in
information cascades. Our
discoveries also provide a foundation for the microscopic mechanisms for
information spreading, potentially leading to implications for cascade
prediction and outlier detection.
|
[
{
"created": "Tue, 8 Aug 2017 05:42:46 GMT",
"version": "v1"
}
] |
2017-08-09
|
[
[
"Zang",
"Chengxi",
""
],
[
"Cui",
"Peng",
""
],
[
"Song",
"Chaoming",
""
],
[
"Faloutsos",
"Christos",
""
],
[
"Zhu",
"Wenwu",
""
]
] |
Information cascades are ubiquitous in both physical society and online social media, taking on large variations in structures, dynamics and semantics. Although the dynamics and semantics of information cascades have been studied, the structural patterns and their correlations with dynamics and semantics are largely unknown. Here we explore a large-scale dataset including $432$ million information cascades with explicit records of spreading traces, spreading behaviors, information content as well as user profiles. We find that the structural complexity of information cascades is far beyond the previous conjectures. We first propose a ten-dimensional metric to quantify the structural characteristics of information cascades, reflecting cascade size, silhouette, direction and activity aspects. We find that a bimodal law governs the majority of the metrics, that information flows in cascades have four directions, and that the self-loop number and average activity of cascades follow power laws. We then analyze the high-order structural patterns of information cascades. Finally, we evaluate to what extent the structural features of information cascades can explain their dynamic patterns and semantics, and uncover some notable implications of structural patterns in information cascades. Our discoveries also provide a foundation for the microscopic mechanisms for information spreading, potentially leading to implications for cascade prediction and outlier detection.
|
2108.07506
|
Haitian Zeng
|
Haitian Zeng, Yuchao Dai, Xin Yu, Xiaohan Wang, Yi Yang
|
PR-RRN: Pairwise-Regularized Residual-Recursive Networks for Non-rigid
Structure-from-Motion
|
Accepted to ICCV 2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose PR-RRN, a novel neural-network based method for Non-rigid
Structure-from-Motion (NRSfM). PR-RRN consists of Residual-Recursive Networks
(RRN) and two extra regularization losses. RRN is designed to effectively
recover 3D shape and camera from 2D keypoints with novel residual-recursive
structure. As NRSfM is a highly under-constrained problem, we propose two new
pairwise regularizations to further regularize the reconstruction. The
Rigidity-based Pairwise Contrastive Loss regularizes the shape representation
by encouraging higher similarity between the representations of high-rigidity
pairs of frames than low-rigidity pairs. We propose minimum singular-value
ratio to measure the pairwise rigidity. The Pairwise Consistency Loss enforces
the reconstruction to be consistent when the estimated shapes and cameras are
exchanged between pairs. Our approach achieves state-of-the-art performance on
CMU MOCAP and PASCAL3D+ dataset.
|
[
{
"created": "Tue, 17 Aug 2021 08:39:02 GMT",
"version": "v1"
}
] |
2021-08-18
|
[
[
"Zeng",
"Haitian",
""
],
[
"Dai",
"Yuchao",
""
],
[
"Yu",
"Xin",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Yang",
"Yi",
""
]
] |
We propose PR-RRN, a novel neural-network-based method for Non-rigid Structure-from-Motion (NRSfM). PR-RRN consists of Residual-Recursive Networks (RRN) and two extra regularization losses. RRN is designed to effectively recover 3D shape and camera from 2D keypoints with a novel residual-recursive structure. As NRSfM is a highly under-constrained problem, we propose two new pairwise regularizations to further regularize the reconstruction. The Rigidity-based Pairwise Contrastive Loss regularizes the shape representation by encouraging higher similarity between the representations of high-rigidity pairs of frames than low-rigidity pairs. We propose the minimum singular-value ratio to measure pairwise rigidity. The Pairwise Consistency Loss enforces the reconstruction to be consistent when the estimated shapes and cameras are exchanged between pairs. Our approach achieves state-of-the-art performance on the CMU MOCAP and PASCAL3D+ datasets.
|
2307.00362
|
Emmanuel Sam
|
Emmanuel Sam and Benjamin Bergougnoux and Petr A. Golovach and Nello
Blaser
|
Kernelization for Finding Lineal Topologies (Depth-First Spanning Trees)
with Many or Few Leaves
|
16 pages, accepted for presentation at FCT 2023
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a given graph $G$, a depth-first search (DFS) tree $T$ of $G$ is an
$r$-rooted spanning tree such that every edge of $G$ is either an edge of $T$
or is between a \textit{descendant} and an \textit{ancestor} in $T$. A graph
$G$ together with a DFS tree is called a \textit{lineal topology} $\mathcal{T}
= (G, r, T)$. Sam et al. (2023) initiated the study of the parameterized complexity
of the \textsc{Min-LLT} and \textsc{Max-LLT} problems which ask, given a graph
$G$ and an integer $k\geq 0$, whether $G$ has a DFS tree with at most $k$ and
at least $k$ leaves, respectively. Particularly, they showed that for the dual
parameterization, where the tasks are to find DFS trees with at least $n-k$ and
at most $n-k$ leaves, respectively, these problems are fixed-parameter
tractable when parameterized by $k$. However, the proofs were based on
Courcelle's theorem, thereby making the running times a tower of exponentials.
We prove that both problems admit polynomial kernels with $O(k^3)$ vertices.
In particular, this implies FPT algorithms running in $k^{O(k)}\cdot
n^{O(1)}$ time. We achieve these results by making use of an $O(k)$-sized
vertex cover structure associated with each problem. This also allows us to
demonstrate polynomial kernels for \textsc{Min-LLT} and \textsc{Max-LLT} for
the structural parameterization by the vertex cover number.
|
[
{
"created": "Sat, 1 Jul 2023 15:19:22 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jul 2023 08:47:18 GMT",
"version": "v2"
}
] |
2023-07-21
|
[
[
"Sam",
"Emmanuel",
""
],
[
"Bergougnoux",
"Benjamin",
""
],
[
"Golovach",
"Petr A.",
""
],
[
"Blaser",
"Nello",
""
]
] |
For a given graph $G$, a depth-first search (DFS) tree $T$ of $G$ is an $r$-rooted spanning tree such that every edge of $G$ is either an edge of $T$ or is between a \textit{descendant} and an \textit{ancestor} in $T$. A graph $G$ together with a DFS tree is called a \textit{lineal topology} $\mathcal{T} = (G, r, T)$. Sam et al. (2023) initiated the study of the parameterized complexity of the \textsc{Min-LLT} and \textsc{Max-LLT} problems which ask, given a graph $G$ and an integer $k\geq 0$, whether $G$ has a DFS tree with at most $k$ and at least $k$ leaves, respectively. Particularly, they showed that for the dual parameterization, where the tasks are to find DFS trees with at least $n-k$ and at most $n-k$ leaves, respectively, these problems are fixed-parameter tractable when parameterized by $k$. However, the proofs were based on Courcelle's theorem, thereby making the running times a tower of exponentials. We prove that both problems admit polynomial kernels with $O(k^3)$ vertices. In particular, this implies FPT algorithms running in $k^{O(k)}\cdot n^{O(1)}$ time. We achieve these results by making use of an $O(k)$-sized vertex cover structure associated with each problem. This also allows us to demonstrate polynomial kernels for \textsc{Min-LLT} and \textsc{Max-LLT} for the structural parameterization by the vertex cover number.
|
2401.03764
|
Rui Ma
|
Ruiqi Liu, Peng Zheng, Ye Wang, Rui Ma
|
3D-SSGAN: Lifting 2D Semantics for 3D-Aware Compositional Portrait
Synthesis
| null | null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing 3D-aware portrait synthesis methods can generate impressive
high-quality images while preserving strong 3D consistency. However, most of
them cannot support the fine-grained part-level control over synthesized
images. Conversely, some GAN-based 2D portrait synthesis methods can achieve
clear disentanglement of facial regions, but they cannot preserve view
consistency due to a lack of 3D modeling abilities. To address these issues, we
propose 3D-SSGAN, a novel framework for 3D-aware compositional portrait image
synthesis. First, a simple yet effective depth-guided 2D-to-3D lifting module
maps the generated 2D part features and semantics to 3D. Then, a volume
renderer with a novel 3D-aware semantic mask renderer is utilized to produce
the composed face features and corresponding masks. The whole framework is
trained end-to-end by discriminating between real and synthesized 2D images and
their semantic masks. Quantitative and qualitative evaluations demonstrate the
superiority of 3D-SSGAN in controllable part-level synthesis while preserving
3D view consistency.
|
[
{
"created": "Mon, 8 Jan 2024 09:41:07 GMT",
"version": "v1"
}
] |
2024-01-09
|
[
[
"Liu",
"Ruiqi",
""
],
[
"Zheng",
"Peng",
""
],
[
"Wang",
"Ye",
""
],
[
"Ma",
"Rui",
""
]
] |
Existing 3D-aware portrait synthesis methods can generate impressive high-quality images while preserving strong 3D consistency. However, most of them cannot support the fine-grained part-level control over synthesized images. Conversely, some GAN-based 2D portrait synthesis methods can achieve clear disentanglement of facial regions, but they cannot preserve view consistency due to a lack of 3D modeling abilities. To address these issues, we propose 3D-SSGAN, a novel framework for 3D-aware compositional portrait image synthesis. First, a simple yet effective depth-guided 2D-to-3D lifting module maps the generated 2D part features and semantics to 3D. Then, a volume renderer with a novel 3D-aware semantic mask renderer is utilized to produce the composed face features and corresponding masks. The whole framework is trained end-to-end by discriminating between real and synthesized 2D images and their semantic masks. Quantitative and qualitative evaluations demonstrate the superiority of 3D-SSGAN in controllable part-level synthesis while preserving 3D view consistency.
|
2106.01904
|
Lorenzo Bertolini
|
Lorenzo Bertolini, Julie Weeds, David Weir, Qiwei Peng
|
Representing Syntax and Composition with Geometric Transformations
|
to appear in Findings of ACL 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The exploitation of syntactic graphs (SyGs) as a word's context has been
shown to be beneficial for distributional semantic models (DSMs), both at the
level of individual word representations and in deriving phrasal
representations via composition. However, notwithstanding the potential
performance benefit, the syntactically-aware DSMs proposed to date have huge
numbers of parameters (compared to conventional DSMs) and suffer from data
sparsity. Furthermore, the encoding of the SyG links (i.e., the syntactic
relations) has been largely limited to linear maps. The knowledge graphs'
literature, on the other hand, has proposed light-weight models employing
different geometric transformations (GTs) to encode edges in a knowledge graph
(KG). Our work explores the possibility of adopting this family of models to
encode SyGs. Furthermore, we investigate which GT better encodes syntactic
relations, so that these representations can be used to enhance phrase-level
composition via syntactic contextualisation.
|
[
{
"created": "Thu, 3 Jun 2021 14:53:34 GMT",
"version": "v1"
}
] |
2021-06-04
|
[
[
"Bertolini",
"Lorenzo",
""
],
[
"Weeds",
"Julie",
""
],
[
"Weir",
"David",
""
],
[
"Peng",
"Qiwei",
""
]
] |
The exploitation of syntactic graphs (SyGs) as a word's context has been shown to be beneficial for distributional semantic models (DSMs), both at the level of individual word representations and in deriving phrasal representations via composition. However, notwithstanding the potential performance benefit, the syntactically-aware DSMs proposed to date have huge numbers of parameters (compared to conventional DSMs) and suffer from data sparsity. Furthermore, the encoding of the SyG links (i.e., the syntactic relations) has been largely limited to linear maps. The knowledge graphs' literature, on the other hand, has proposed light-weight models employing different geometric transformations (GTs) to encode edges in a knowledge graph (KG). Our work explores the possibility of adopting this family of models to encode SyGs. Furthermore, we investigate which GT better encodes syntactic relations, so that these representations can be used to enhance phrase-level composition via syntactic contextualisation.
|
2202.13985
|
Stuart Armstrong
|
Rebecca Gorman, Stuart Armstrong
|
The dangers in algorithms learning humans' values and irrationalities
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
For an artificial intelligence (AI) to be aligned with human values (or human
preferences), it must first learn those values. AI systems that are trained on
human behavior risk miscategorising human irrationalities as human values --
and then optimising for these irrationalities. Simply learning human values
still carries risks: AI learning them will inevitably also gain information on
human irrationalities and human behaviour/policy. Both of these can be
dangerous: knowing human policy allows an AI to become generically more
powerful (whether it is partially aligned or not aligned at all), while
learning human irrationalities allows it to exploit humans without needing to
provide value in return. This paper analyses the danger in developing
artificial intelligence that learns about human irrationalities and human
policy, and constructs a model recommendation system with various levels of
information about human biases, human policy, and human values. It concludes
that, whatever the power and knowledge of the AI, it is more dangerous for it
to know human irrationalities than human values. Thus it is better for the AI
to learn human values directly, rather than learning human biases and then
deducing values from behaviour.
|
[
{
"created": "Mon, 28 Feb 2022 17:41:39 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Mar 2022 11:23:04 GMT",
"version": "v2"
}
] |
2022-03-02
|
[
[
"Gorman",
"Rebecca",
""
],
[
"Armstrong",
"Stuart",
""
]
] |
For an artificial intelligence (AI) to be aligned with human values (or human preferences), it must first learn those values. AI systems that are trained on human behavior risk miscategorising human irrationalities as human values -- and then optimising for these irrationalities. Simply learning human values still carries risks: AI learning them will inevitably also gain information on human irrationalities and human behaviour/policy. Both of these can be dangerous: knowing human policy allows an AI to become generically more powerful (whether it is partially aligned or not aligned at all), while learning human irrationalities allows it to exploit humans without needing to provide value in return. This paper analyses the danger in developing artificial intelligence that learns about human irrationalities and human policy, and constructs a model recommendation system with various levels of information about human biases, human policy, and human values. It concludes that, whatever the power and knowledge of the AI, it is more dangerous for it to know human irrationalities than human values. Thus it is better for the AI to learn human values directly, rather than learning human biases and then deducing values from behaviour.
|
2109.14879
|
Grzegorz Chlebus
|
Grzegorz Chlebus and Andrea Schenk and Horst K. Hahn and Bram van
Ginneken and Hans Meine
|
Robust Segmentation Models using an Uncertainty Slice Sampling Based
Annotation Workflow
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic segmentation neural networks require pixel-level annotations in
large quantities to achieve a good performance. In the medical domain, such
annotations are expensive, because they are time-consuming and require expert
knowledge. Active learning optimizes the annotation effort by devising
strategies to select cases for labeling that are most informative to the model.
In this work, we propose an uncertainty slice sampling (USS) strategy for
semantic segmentation of 3D medical volumes that selects 2D image slices for
annotation and compare it with various other strategies. We demonstrate the
efficiency of USS on a CT liver segmentation task using multi-site data. After
five iterations, the training data resulting from USS consisted of 2410 slices
(4% of all slices in the data pool) compared to 8121 (13%), 8641 (14%), and
3730 (6%) for uncertainty volume (UVS), random volume (RVS), and random slice
(RSS) sampling, respectively. Despite being trained on the smallest amount of
data, the model based on the USS strategy evaluated on 234 test volumes
significantly outperformed models trained according to other strategies and
achieved a mean Dice index of 0.964, a relative volume error of 4.2%, a mean
surface distance of 1.35 mm, and a Hausdorff distance of 23.4 mm. This was only
slightly inferior to 0.967, 3.8%, 1.18 mm, and 22.9 mm achieved by a model
trained on all available data, but the robustness analysis using the 5th
percentile of Dice and the 95th percentile of the remaining metrics
demonstrated that USS resulted not only in the most robust model compared to
other sampling schemes, but also outperformed the model trained on all data
according to Dice (0.946 vs. 0.945) and mean surface distance (1.92 mm vs. 2.03
mm).
|
[
{
"created": "Thu, 30 Sep 2021 06:56:11 GMT",
"version": "v1"
}
] |
2021-10-01
|
[
[
"Chlebus",
"Grzegorz",
""
],
[
"Schenk",
"Andrea",
""
],
[
"Hahn",
"Horst K.",
""
],
[
"van Ginneken",
"Bram",
""
],
[
"Meine",
"Hans",
""
]
] |
Semantic segmentation neural networks require pixel-level annotations in large quantities to achieve a good performance. In the medical domain, such annotations are expensive, because they are time-consuming and require expert knowledge. Active learning optimizes the annotation effort by devising strategies to select cases for labeling that are most informative to the model. In this work, we propose an uncertainty slice sampling (USS) strategy for semantic segmentation of 3D medical volumes that selects 2D image slices for annotation and compare it with various other strategies. We demonstrate the efficiency of USS on a CT liver segmentation task using multi-site data. After five iterations, the training data resulting from USS consisted of 2410 slices (4% of all slices in the data pool) compared to 8121 (13%), 8641 (14%), and 3730 (6%) for uncertainty volume (UVS), random volume (RVS), and random slice (RSS) sampling, respectively. Despite being trained on the smallest amount of data, the model based on the USS strategy evaluated on 234 test volumes significantly outperformed models trained according to other strategies and achieved a mean Dice index of 0.964, a relative volume error of 4.2%, a mean surface distance of 1.35 mm, and a Hausdorff distance of 23.4 mm. This was only slightly inferior to 0.967, 3.8%, 1.18 mm, and 22.9 mm achieved by a model trained on all available data, but the robustness analysis using the 5th percentile of Dice and the 95th percentile of the remaining metrics demonstrated that USS resulted not only in the most robust model compared to other sampling schemes, but also outperformed the model trained on all data according to Dice (0.946 vs. 0.945) and mean surface distance (1.92 mm vs. 2.03 mm).
|
2402.10815
|
Noleen K\"ohler
|
Tesshu Hanaka, Noleen K\"ohler and Michael Lampis
|
Core Stability in Additively Separable Hedonic Games of Low Treewidth
| null | null | null | null |
cs.DS cs.CC cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Additively Separable Hedonic Games (ASHGs) are coalition-formation games where
we are given a graph whose vertices represent $n$ selfish agents and the weight
of each edge $uv$ denotes how much agent $u$ gains (or loses) when she is
placed in the same coalition as agent $v$. We revisit the computational
complexity of the well-known notion of core stability of ASHGs, where the goal
is to construct a partition of the agents into coalitions such that no group of
agents would prefer to diverge from the given partition and form a new
(blocking) coalition. Since both finding a core stable partition and verifying
that a given partition is core stable are intractable problems
($\Sigma_2^p$-complete and coNP-complete respectively) we study their
complexity from the point of view of structural parameterized complexity, using
standard graph-theoretic parameters, such as treewidth.
|
[
{
"created": "Fri, 16 Feb 2024 16:39:14 GMT",
"version": "v1"
}
] |
2024-02-19
|
[
[
"Hanaka",
"Tesshu",
""
],
[
"Köhler",
"Noleen",
""
],
[
"Lampis",
"Michael",
""
]
] |
Additively Separable Hedonic Games (ASHGs) are coalition-formation games where we are given a graph whose vertices represent $n$ selfish agents and the weight of each edge $uv$ denotes how much agent $u$ gains (or loses) when she is placed in the same coalition as agent $v$. We revisit the computational complexity of the well-known notion of core stability of ASHGs, where the goal is to construct a partition of the agents into coalitions such that no group of agents would prefer to diverge from the given partition and form a new (blocking) coalition. Since both finding a core stable partition and verifying that a given partition is core stable are intractable problems ($\Sigma_2^p$-complete and coNP-complete respectively) we study their complexity from the point of view of structural parameterized complexity, using standard graph-theoretic parameters, such as treewidth.
|
1709.05737
|
Dong Liu
|
Rui Song, Dong Liu, Houqiang Li, Feng Wu
|
Neural network-based arithmetic coding of intra prediction modes in HEVC
|
VCIP 2017
| null |
10.1109/VCIP.2017.8305104
| null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In both H.264 and HEVC, context-adaptive binary arithmetic coding (CABAC) is
adopted as the entropy coding method. CABAC relies on manually designed
binarization processes as well as handcrafted context models, which may
restrict the compression efficiency. In this paper, we propose an arithmetic
coding strategy by training neural networks, and make preliminary studies on
coding of the intra prediction modes in HEVC. Instead of binarization, we
propose to directly estimate the probability distribution of the 35 intra
prediction modes with the adoption of a multi-level arithmetic codec. Instead
of handcrafted context models, we utilize convolutional neural network (CNN) to
perform the probability estimation. Simulation results show that our proposed
arithmetic coding leads to as high as 9.9% bits saving compared with CABAC.
|
[
{
"created": "Mon, 18 Sep 2017 01:32:45 GMT",
"version": "v1"
}
] |
2018-03-30
|
[
[
"Song",
"Rui",
""
],
[
"Liu",
"Dong",
""
],
[
"Li",
"Houqiang",
""
],
[
"Wu",
"Feng",
""
]
] |
In both H.264 and HEVC, context-adaptive binary arithmetic coding (CABAC) is adopted as the entropy coding method. CABAC relies on manually designed binarization processes as well as handcrafted context models, which may restrict the compression efficiency. In this paper, we propose an arithmetic coding strategy by training neural networks, and make preliminary studies on coding of the intra prediction modes in HEVC. Instead of binarization, we propose to directly estimate the probability distribution of the 35 intra prediction modes with the adoption of a multi-level arithmetic codec. Instead of handcrafted context models, we utilize convolutional neural network (CNN) to perform the probability estimation. Simulation results show that our proposed arithmetic coding leads to as high as 9.9% bits saving compared with CABAC.
|
2203.08931
|
Reno Kriz
|
Anietie Andy and Siyi Liu and Daphne Ippolito and Reno Kriz and Chris
Callison-Burch and Derry Wijaya
|
Creating Multimedia Summaries Using Tweets and Videos
|
8 pages, 3 figures, 7 tables
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
While popular televised events such as presidential debates or TV shows are
airing, people provide commentary on them in real-time. In this paper, we
propose a simple yet effective approach to combine social media commentary and
videos to create a multimedia summary of televised events. Our approach
identifies scenes from these events based on spikes of mentions of people
involved in the event and automatically selects tweets and frames from the
videos that occur during the time period of the spike that talk about and show
the people being discussed.
|
[
{
"created": "Wed, 16 Mar 2022 20:37:49 GMT",
"version": "v1"
}
] |
2022-03-18
|
[
[
"Andy",
"Anietie",
""
],
[
"Liu",
"Siyi",
""
],
[
"Ippolito",
"Daphne",
""
],
[
"Kriz",
"Reno",
""
],
[
"Callison-Burch",
"Chris",
""
],
[
"Wijaya",
"Derry",
""
]
] |
While popular televised events such as presidential debates or TV shows are airing, people provide commentary on them in real-time. In this paper, we propose a simple yet effective approach to combine social media commentary and videos to create a multimedia summary of televised events. Our approach identifies scenes from these events based on spikes of mentions of people involved in the event and automatically selects tweets and frames from the videos that occur during the time period of the spike that talk about and show the people being discussed.
|
1108.3636
|
EPTCS
|
Mathieu Roux (LMNO and GREYC, CNRS and University of Caen, France),
Brigitte Vall\'ee (GREYC, CNRS and University of Caen, France)
|
Information theory: Sources, Dirichlet series, and realistic analyses of
data structures
|
In Proceedings WORDS 2011, arXiv:1108.3412
|
EPTCS 63, 2011, pp. 199-214
|
10.4204/EPTCS.63.26
| null |
cs.IT cs.DM cs.DS math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the text algorithms build data structures on words, mainly trees, as
digital trees (tries) or binary search trees (bst). The mechanism which
produces symbols of the words (one symbol at each unit time) is called a
source, in information theory contexts. The probabilistic behaviour of the
trees built on words emitted by the same source depends on two factors: the
algorithmic properties of the tree, together with the information-theoretic
properties of the source. Very often, these two factors are considered in an
overly simplified way: from the algorithmic point of view, the cost of the
bst is only
measured in terms of the number of comparisons between words --from the
information theoretic point of view, only simple sources (memoryless sources or
Markov chains) are studied.
We wish to perform here a realistic analysis, and we choose to deal together
with a general source and a realistic cost for data structures: we take into
account comparisons between symbols, and we consider a general model of source,
related to a dynamical system, which is called a dynamical source. Our methods
are close to analytic combinatorics, and our main object of interest is the
generating function of the source Lambda(s), which is here of Dirichlet type.
Such an object transforms probabilistic properties of the source into analytic
properties. The tameness of the source, which is defined through analytic
properties of Lambda(s), appears to be central in the analysis, and is
precisely studied for the class of dynamical sources. We focus here on
arithmetical conditions, of diophantine type, which are sufficient to imply
tameness on a domain with hyperbolic shape.
|
[
{
"created": "Thu, 18 Aug 2011 03:54:43 GMT",
"version": "v1"
}
] |
2011-08-19
|
[
[
"Roux",
"Mathieu",
"",
"LMNO and GREYC, CNRS and University of Caen, France"
],
[
"Vallée",
"Brigitte",
"",
"GREYC, CNRS and University of Caen, France"
]
] |
Most of the text algorithms build data structures on words, mainly trees, as digital trees (tries) or binary search trees (bst). The mechanism which produces symbols of the words (one symbol at each unit time) is called a source, in information theory contexts. The probabilistic behaviour of the trees built on words emitted by the same source depends on two factors: the algorithmic properties of the tree, together with the information-theoretic properties of the source. Very often, these two factors are considered in an overly simplified way: from the algorithmic point of view, the cost of the bst is only measured in terms of the number of comparisons between words --from the information theoretic point of view, only simple sources (memoryless sources or Markov chains) are studied. We wish to perform here a realistic analysis, and we choose to deal together with a general source and a realistic cost for data structures: we take into account comparisons between symbols, and we consider a general model of source, related to a dynamical system, which is called a dynamical source. Our methods are close to analytic combinatorics, and our main object of interest is the generating function of the source Lambda(s), which is here of Dirichlet type. Such an object transforms probabilistic properties of the source into analytic properties. The tameness of the source, which is defined through analytic properties of Lambda(s), appears to be central in the analysis, and is precisely studied for the class of dynamical sources. We focus here on arithmetical conditions, of diophantine type, which are sufficient to imply tameness on a domain with hyperbolic shape.
|
cs/0107024
|
Joseph O'Rourke
|
Erik D. Demaine, Martin L. Demaine, Anna Lubiw, Joseph O'Rourke
|
Enumerating Foldings and Unfoldings between Polygons and Polytopes
|
12 pages; 10 figures; 10 references. Revision of version in
Proceedings of the Japan Conference on Discrete and Computational Geometry,
Tokyo, Nov. 2000, pp. 9-12. See also cs.CG/0007019
|
Graphs and Combinatorics 18(1) 93-104 (2002)
| null | null |
cs.CG cs.DM
| null |
We pose and answer several questions concerning the number of ways to fold a
polygon to a polytope, and how many polytopes can be obtained from one polygon;
and the analogous questions for unfolding polytopes to polygons. Our answers
are, roughly: exponentially many, or nondenumerably infinite.
|
[
{
"created": "Wed, 18 Jul 2001 13:13:39 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Demaine",
"Erik D.",
""
],
[
"Demaine",
"Martin L.",
""
],
[
"Lubiw",
"Anna",
""
],
[
"O'Rourke",
"Joseph",
""
]
] |
We pose and answer several questions concerning the number of ways to fold a polygon to a polytope, and how many polytopes can be obtained from one polygon; and the analogous questions for unfolding polytopes to polygons. Our answers are, roughly: exponentially many, or nondenumerably infinite.
|
2001.11071
|
Ning Zhang
|
Ning Zhang, Yu Cao, Benyuan Liu, and Yan Luo
|
3D Aggregated Faster R-CNN for General Lesion Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lesions are damages and abnormalities in tissues of the human body. Many of
them can later turn into fatal diseases such as cancers. Detecting lesions is
of great importance for early diagnosis and timely treatment. To this end,
Computed Tomography (CT) scans often serve as the screening tool, allowing us
to leverage modern object detection techniques to detect the lesions.
However, lesions in CT scans are often small and sparse. The local area of a
lesion can be very confusing, causing the region-based classifier branch of
Faster R-CNN to easily fail. Therefore, most of the existing
state-of-the-art solutions train two types of heterogeneous networks
(multi-phase) separately for candidate generation and False Positive
Reduction (FPR). In this paper, we propose an end-to-end 3D Aggregated
Faster R-CNN solution by stacking an "aggregated classifier branch" on the
backbone of the RPN. This classifier branch is equipped with Feature
Aggregation and Local Magnification Layers to enhance it. We demonstrate
that our model achieves state-of-the-art performance on both the LUNA16 and
DeepLesion datasets. In particular, we achieve the best single-model FROC
performance on LUNA16, with an inference time of 4.2s per processed scan.
|
[
{
"created": "Wed, 29 Jan 2020 19:57:35 GMT",
"version": "v1"
}
] |
2020-01-31
|
[
[
"Zhang",
"Ning",
""
],
[
"Cao",
"Yu",
""
],
[
"Liu",
"Benyuan",
""
],
[
"Luo",
"Yan",
""
]
] |
Lesions are damages and abnormalities in tissues of the human body. Many of them can later turn into fatal diseases such as cancers. Detecting lesions is of great importance for early diagnosis and timely treatment. To this end, Computed Tomography (CT) scans often serve as the screening tool, allowing us to leverage modern object detection techniques to detect the lesions. However, lesions in CT scans are often small and sparse. The local area of a lesion can be very confusing, causing the region-based classifier branch of Faster R-CNN to easily fail. Therefore, most of the existing state-of-the-art solutions train two types of heterogeneous networks (multi-phase) separately for candidate generation and False Positive Reduction (FPR). In this paper, we propose an end-to-end 3D Aggregated Faster R-CNN solution by stacking an "aggregated classifier branch" on the backbone of the RPN. This classifier branch is equipped with Feature Aggregation and Local Magnification Layers to enhance it. We demonstrate that our model achieves state-of-the-art performance on both the LUNA16 and DeepLesion datasets. In particular, we achieve the best single-model FROC performance on LUNA16, with an inference time of 4.2s per processed scan.
|
2206.11866
|
Jo\~ao Vitorino
|
Jo\~ao Vitorino, Tiago Dias, Tiago Fonseca, Nuno Oliveira, Isabel
Pra\c{c}a
|
A Multi-Policy Framework for Deep Learning-Based Fake News Detection
|
10 pages, 1 table, 3 figures, DCAI 2022 conference
| null |
10.1007/978-3-031-20859-1_13
| null |
cs.CL cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Connectivity plays an ever-increasing role in modern society, with people all
around the world having easy access to rapidly disseminated information.
However, a more interconnected society enables the spread of intentionally
false information. To mitigate the negative impacts of fake news, it is
essential to improve detection methodologies. This work introduces Multi-Policy
Statement Checker (MPSC), a framework that automates fake news detection by
using deep learning techniques to analyze a statement itself and its related
news articles, predicting whether it is seemingly credible or suspicious. The
proposed framework was evaluated using four merged datasets containing real and
fake news. Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and
Bidirectional Encoder Representations from Transformers (BERT) models were
trained to utilize both lexical and syntactic features, and their performance
was evaluated. The obtained results demonstrate that a multi-policy analysis
reliably identifies suspicious statements, which can be advantageous for fake
news detection.
|
[
{
"created": "Wed, 1 Jun 2022 21:25:21 GMT",
"version": "v1"
}
] |
2023-02-03
|
[
[
"Vitorino",
"João",
""
],
[
"Dias",
"Tiago",
""
],
[
"Fonseca",
"Tiago",
""
],
[
"Oliveira",
"Nuno",
""
],
[
"Praça",
"Isabel",
""
]
] |
Connectivity plays an ever-increasing role in modern society, with people all around the world having easy access to rapidly disseminated information. However, a more interconnected society enables the spread of intentionally false information. To mitigate the negative impacts of fake news, it is essential to improve detection methodologies. This work introduces Multi-Policy Statement Checker (MPSC), a framework that automates fake news detection by using deep learning techniques to analyze a statement itself and its related news articles, predicting whether it is seemingly credible or suspicious. The proposed framework was evaluated using four merged datasets containing real and fake news. Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Bidirectional Encoder Representations from Transformers (BERT) models were trained to utilize both lexical and syntactic features, and their performance was evaluated. The obtained results demonstrate that a multi-policy analysis reliably identifies suspicious statements, which can be advantageous for fake news detection.
|
0707.2185
|
Damien Chablat
|
Sylvain Guegan (IRCCyN), Wisama Khalil (IRCCyN), Damien Chablat
(IRCCyN), Philippe Wenger (IRCCyN)
|
Mod\'elisation Dynamique d'un Robot Parall\`ele \`a 3-DDL : l'Orthoglide
| null |
Conf\'erence Internationale Francophone d'Automatique (07/2002)
1-6
| null | null |
cs.RO
| null |
In this article, we propose a method for calculating the inverse and
direct dynamic models of the Orthoglide, a parallel robot with three degrees of
freedom in translation. These models are calculated starting from the elements
of the dynamic model of the kinematic chain structure and the Newton-Euler
equations applied to the platform. These models are obtained in explicit
form and have an interesting physical interpretation.
|
[
{
"created": "Sun, 15 Jul 2007 07:14:51 GMT",
"version": "v1"
}
] |
2007-07-17
|
[
[
"Guegan",
"Sylvain",
"",
"IRCCyN"
],
[
"Khalil",
"Wisama",
"",
"IRCCyN"
],
[
"Chablat",
"Damien",
"",
"IRCCyN"
],
[
"Wenger",
"Philippe",
"",
"IRCCyN"
]
] |
In this article, we propose a method for calculating the inverse and direct dynamic models of the Orthoglide, a parallel robot with three degrees of freedom in translation. These models are calculated starting from the elements of the dynamic model of the kinematic chain structure and the Newton-Euler equations applied to the platform. These models are obtained in explicit form and have an interesting physical interpretation.
|
1910.02390
|
Amber Nigam
|
Amber Nigam, Pragati Jaiswal, Uma Girkar, Teertha Arora, and Leo A.
Celi
|
Migration through Machine Learning Lens -- Predicting Sexual and
Reproductive Health Vulnerability of Young Migrants
|
Accepted for Machine Learning for Health (ML4H) at NeurIPS 2019 -
Extended Abstract
| null | null | null |
cs.LG cs.CY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we have discussed initial findings and results of our
experiment to predict sexual and reproductive health vulnerabilities of
migrants in a data-constrained environment. Notwithstanding the limited
research and data about migrants and migration cities, we propose a solution
that simultaneously focuses on data gathering from migrants, augmenting
awareness of the migrants to reduce mishaps, and setting up a mechanism to
present insights to the key stakeholders in migration to act upon. We have
designed a webapp for the stakeholders involved in migration: migrants, who
would participate in data gathering process and can also use the app for
getting to know safety and awareness tips based on analysis of the data
received; public health workers, who would have access to the database of
migrants on the app; policy makers, who would have a greater understanding of
the ground reality, and of the patterns of migration through machine-learned
analysis. Finally, we have experimented with different machine learning models
on an artificially curated dataset. We have shown, through experiments, how
machine learning can assist in predicting the migrants at risk and can also
help in identifying the critical factors that make migration dangerous for
migrants. The results for identifying vulnerable migrants through machine
learning algorithms are statistically significant at an alpha of 0.05.
|
[
{
"created": "Sun, 6 Oct 2019 07:09:13 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Oct 2019 03:56:45 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Nov 2019 20:14:39 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Nov 2019 10:00:02 GMT",
"version": "v4"
}
] |
2019-11-25
|
[
[
"Nigam",
"Amber",
""
],
[
"Jaiswal",
"Pragati",
""
],
[
"Girkar",
"Uma",
""
],
[
"Arora",
"Teertha",
""
],
[
"Celi",
"Leo A.",
""
]
] |
In this paper, we have discussed initial findings and results of our experiment to predict sexual and reproductive health vulnerabilities of migrants in a data-constrained environment. Notwithstanding the limited research and data about migrants and migration cities, we propose a solution that simultaneously focuses on data gathering from migrants, augmenting awareness of the migrants to reduce mishaps, and setting up a mechanism to present insights to the key stakeholders in migration to act upon. We have designed a webapp for the stakeholders involved in migration: migrants, who would participate in data gathering process and can also use the app for getting to know safety and awareness tips based on analysis of the data received; public health workers, who would have access to the database of migrants on the app; policy makers, who would have a greater understanding of the ground reality, and of the patterns of migration through machine-learned analysis. Finally, we have experimented with different machine learning models on an artificially curated dataset. We have shown, through experiments, how machine learning can assist in predicting the migrants at risk and can also help in identifying the critical factors that make migration dangerous for migrants. The results for identifying vulnerable migrants through machine learning algorithms are statistically significant at an alpha of 0.05.
|
2112.07019
|
Lennart Bamberg Dr.-Ing.
|
Lennart Bamberg, Arash Pourtaherian, Luc Waeijen, Anupam Chahar,
Orlando Moreira
|
Synapse Compression for Event-Based Convolutional-Neural-Network
Accelerators
|
Preprint accepted by the IEEE Transactions on Parallel and
Distributed Systems
| null | null | null |
cs.AR cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Manufacturing-viable neuromorphic chips require novel computer architectures
to achieve the massively parallel and efficient information processing the
brain supports so effortlessly. Emerging event-based architectures are making
this dream a reality. However, the large memory requirements for synaptic
connectivity are a showstopper for the execution of modern convolutional neural
networks (CNNs) on massively parallel, event-based (spiking) architectures.
This work overcomes this roadblock by contributing a lightweight hardware
scheme to compress the synaptic memory requirements by several thousand times,
enabling the execution of complex CNNs on a single chip of small form factor. A
silicon implementation in a 12-nm technology shows that the technique increases
the system's implementation cost by only 2%, despite achieving a total
memory-footprint reduction of up to 374x compared to the best previously
published technique.
|
[
{
"created": "Mon, 13 Dec 2021 21:14:35 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Dec 2021 23:05:11 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Jan 2023 09:19:17 GMT",
"version": "v3"
}
] |
2023-01-25
|
[
[
"Bamberg",
"Lennart",
""
],
[
"Pourtaherian",
"Arash",
""
],
[
"Waeijen",
"Luc",
""
],
[
"Chahar",
"Anupam",
""
],
[
"Moreira",
"Orlando",
""
]
] |
Manufacturing-viable neuromorphic chips require novel computer architectures to achieve the massively parallel and efficient information processing the brain supports so effortlessly. Emerging event-based architectures are making this dream a reality. However, the large memory requirements for synaptic connectivity are a showstopper for the execution of modern convolutional neural networks (CNNs) on massively parallel, event-based (spiking) architectures. This work overcomes this roadblock by contributing a lightweight hardware scheme to compress the synaptic memory requirements by several thousand times, enabling the execution of complex CNNs on a single chip of small form factor. A silicon implementation in a 12-nm technology shows that the technique increases the system's implementation cost by only 2%, despite achieving a total memory-footprint reduction of up to 374x compared to the best previously published technique.
|
2307.07354
|
Anna Bernasconi
|
Stefano Ceri, Anna Bernasconi, Alessia Gagliardi, Davide Martinenghi,
Luigi Bellomarini, Davide Magnanimi
|
PG-Triggers: Triggers for Property Graphs
|
13 pages, 5 figures, 4 tables
|
In Companion of the 2024 International Conference on Management of
Data (SIGMOD/PODS '24). Association for Computing Machinery, New York, NY,
USA, 373-385
|
10.1145/3626246.3653386
| null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Graph databases are emerging as the leading data management technology for
storing large knowledge graphs; significant efforts are ongoing to produce new
standards (such as the Graph Query Language, GQL), as well as enrich them with
properties, types, schemas, and keys. In this article, we introduce
PG-Triggers, a complete proposal for adding triggers to Property Graphs, along
the direction marked by the SQL3 Standard. We define the syntax and semantics
of PG-Triggers and then illustrate how they can be implemented on top of Neo4j,
one of the most popular graph databases. In particular, we introduce a
syntax-directed translation from PG-Triggers into Neo4j, which makes use of the
so-called {\it APOC triggers}; APOC is a community-contributed library for
augmenting the Cypher query language supported by Neo4j. We also cover
Memgraph, and show that our approach applies to this system in a similar way.
We illustrate the use of PG-Triggers through a life science application
inspired by the COVID-19 pandemic. The main objective of this article is to
introduce an active database standard for graph databases as a first-class
citizen at a time when reactive graph management is in its infancy, so as to
minimize the conversion efforts towards a full-fledged standard proposal.
|
[
{
"created": "Fri, 14 Jul 2023 14:02:20 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Feb 2024 13:42:50 GMT",
"version": "v2"
},
{
"created": "Mon, 10 Jun 2024 22:29:14 GMT",
"version": "v3"
}
] |
2024-06-12
|
[
[
"Ceri",
"Stefano",
""
],
[
"Bernasconi",
"Anna",
""
],
[
"Gagliardi",
"Alessia",
""
],
[
"Martinenghi",
"Davide",
""
],
[
"Bellomarini",
"Luigi",
""
],
[
"Magnanimi",
"Davide",
""
]
] |
Graph databases are emerging as the leading data management technology for storing large knowledge graphs; significant efforts are ongoing to produce new standards (such as the Graph Query Language, GQL), as well as enrich them with properties, types, schemas, and keys. In this article, we introduce PG-Triggers, a complete proposal for adding triggers to Property Graphs, along the direction marked by the SQL3 Standard. We define the syntax and semantics of PG-Triggers and then illustrate how they can be implemented on top of Neo4j, one of the most popular graph databases. In particular, we introduce a syntax-directed translation from PG-Triggers into Neo4j, which makes use of the so-called {\it APOC triggers}; APOC is a community-contributed library for augmenting the Cypher query language supported by Neo4j. We also cover Memgraph, and show that our approach applies to this system in a similar way. We illustrate the use of PG-Triggers through a life science application inspired by the COVID-19 pandemic. The main objective of this article is to introduce an active database standard for graph databases as a first-class citizen at a time when reactive graph management is in its infancy, so as to minimize the conversion efforts towards a full-fledged standard proposal.
|
2209.09660
|
Carlos Perez Galvan Dr
|
Imanol Arzac-Garmendia, Mattia Vallerio, Carlos Perez-Galvan and
Francisco J. Navarro-Brull
|
Industrial Data Science for Batch Manufacturing Processes
| null | null | null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Batch processes show several sources of variability, from raw materials'
properties to initial and evolving conditions that change during the different
events in the manufacturing process. In this chapter, we will illustrate with
an industrial example how to use machine learning to reduce this apparent
excess of data while maintaining the relevant information for process
engineers. Two common use cases will be presented: 1) AutoML analysis to
quickly find correlations in batch process data, and 2) trajectory analysis to
monitor and identify anomalous batches leading to process control improvements.
|
[
{
"created": "Tue, 20 Sep 2022 11:59:13 GMT",
"version": "v1"
}
] |
2022-09-21
|
[
[
"Arzac-Garmendia",
"Imanol",
""
],
[
"Vallerio",
"Mattia",
""
],
[
"Perez-Galvan",
"Carlos",
""
],
[
"Navarro-Brull",
"Francisco J.",
""
]
] |
Batch processes show several sources of variability, from raw materials' properties to initial and evolving conditions that change during the different events in the manufacturing process. In this chapter, we will illustrate with an industrial example how to use machine learning to reduce this apparent excess of data while maintaining the relevant information for process engineers. Two common use cases will be presented: 1) AutoML analysis to quickly find correlations in batch process data, and 2) trajectory analysis to monitor and identify anomalous batches leading to process control improvements.
|
2210.15131
|
Haohan Guo
|
Haohan Guo, Fenglong Xie, Xixin Wu, Hui Lu, Helen Meng
|
Towards High-Quality Neural TTS for Low-Resource Languages by Learning
Compact Speech Representations
|
Submitted to ICASSP 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims to enhance low-resource TTS by reducing training data
requirements using compact speech representations. A Multi-Stage Multi-Codebook
(MSMC) VQ-GAN is trained to learn the representation, MSMCR, and decode it to
waveforms. Subsequently, we train the multi-stage predictor to predict MSMCRs
from the text for TTS synthesis. Moreover, we optimize the training strategy by
leveraging more audio to learn MSMCRs better for low-resource languages. It
selects audio from other languages using a speaker similarity metric to augment
the training set, and applies transfer learning to improve training quality. In
MOS tests, the proposed system significantly outperforms FastSpeech and VITS in
standard and low-resource scenarios, showing lower data requirements. The
proposed training strategy effectively enhances MSMCRs on waveform
reconstruction. It improves TTS performance further, winning 77% of the votes in
the preference test for the low-resource TTS with only 15 minutes of paired
data.
|
[
{
"created": "Thu, 27 Oct 2022 02:32:00 GMT",
"version": "v1"
}
] |
2022-10-28
|
[
[
"Guo",
"Haohan",
""
],
[
"Xie",
"Fenglong",
""
],
[
"Wu",
"Xixin",
""
],
[
"Lu",
"Hui",
""
],
[
"Meng",
"Helen",
""
]
] |
This paper aims to enhance low-resource TTS by reducing training data requirements using compact speech representations. A Multi-Stage Multi-Codebook (MSMC) VQ-GAN is trained to learn the representation, MSMCR, and decode it to waveforms. Subsequently, we train the multi-stage predictor to predict MSMCRs from the text for TTS synthesis. Moreover, we optimize the training strategy by leveraging more audio to learn MSMCRs better for low-resource languages. It selects audio from other languages using a speaker similarity metric to augment the training set, and applies transfer learning to improve training quality. In MOS tests, the proposed system significantly outperforms FastSpeech and VITS in standard and low-resource scenarios, showing lower data requirements. The proposed training strategy effectively enhances MSMCRs on waveform reconstruction. It improves TTS performance further, winning 77% of the votes in the preference test for the low-resource TTS with only 15 minutes of paired data.
|
1906.07328
|
Alex Cummaudo Mr
|
Alex Cummaudo, Rajesh Vasa, John Grundy, Mohamed Abdelrazek, Andrew
Cain
|
Losing Confidence in Quality: Unspoken Evolution of Computer Vision
Services
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in artificial intelligence (AI) and machine learning (ML),
such as computer vision, are now available as intelligent services and their
accessibility and simplicity is compelling. Multiple vendors now offer this
technology as cloud services and developers want to leverage these advances to
provide value to end-users. However, there is no firm investigation into the
maintenance and evolution risks arising from use of these intelligent services;
in particular, their behavioural consistency and transparency of their
functionality. We evaluated the responses of three different intelligent
services (specifically computer vision) over 11 months using 3 different data
sets, verifying responses against the respective documentation and assessing
evolution risk. We found that there are: (1) inconsistencies in how these
services behave; (2) evolution risk in the responses; and (3) a lack of clear
communication that documents these risks and inconsistencies. We propose a set
of recommendations to both developers and intelligent service providers to
inform risk and assist maintainability.
|
[
{
"created": "Tue, 18 Jun 2019 01:11:43 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jul 2019 23:51:03 GMT",
"version": "v2"
}
] |
2019-08-01
|
[
[
"Cummaudo",
"Alex",
""
],
[
"Vasa",
"Rajesh",
""
],
[
"Grundy",
"John",
""
],
[
"Abdelrazek",
"Mohamed",
""
],
[
"Cain",
"Andrew",
""
]
] |
Recent advances in artificial intelligence (AI) and machine learning (ML), such as computer vision, are now available as intelligent services and their accessibility and simplicity is compelling. Multiple vendors now offer this technology as cloud services and developers want to leverage these advances to provide value to end-users. However, there is no firm investigation into the maintenance and evolution risks arising from use of these intelligent services; in particular, their behavioural consistency and transparency of their functionality. We evaluated the responses of three different intelligent services (specifically computer vision) over 11 months using 3 different data sets, verifying responses against the respective documentation and assessing evolution risk. We found that there are: (1) inconsistencies in how these services behave; (2) evolution risk in the responses; and (3) a lack of clear communication that documents these risks and inconsistencies. We propose a set of recommendations to both developers and intelligent service providers to inform risk and assist maintainability.
|
1901.04321
|
Thom Lake
|
Thom Lake, Sinead A. Williamson, Alexander T. Hawk, Christopher C.
Johnson, Benjamin P. Wing
|
Large-scale Collaborative Filtering with Product Embeddings
|
15 pages, 5 figures
| null | null | null |
cs.IR cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The application of machine learning techniques to large-scale personalized
recommendation problems is a challenging task. Such systems must make sense of
enormous amounts of implicit feedback in order to understand user preferences
across numerous product categories. This paper presents a deep learning based
solution to this problem within the collaborative filtering with implicit
feedback framework. Our approach combines neural attention mechanisms, which
allow for context dependent weighting of past behavioral signals, with
representation learning techniques to produce models which obtain extremely
high coverage, can easily incorporate new information as it becomes available,
and are computationally efficient. Offline experiments demonstrate significant
performance improvements when compared to several alternative methods from the
literature. Results from an online setting show that the approach compares
favorably with current production techniques used to produce personalized
product recommendations.
|
[
{
"created": "Fri, 11 Jan 2019 17:28:59 GMT",
"version": "v1"
}
] |
2019-01-15
|
[
[
"Lake",
"Thom",
""
],
[
"Williamson",
"Sinead A.",
""
],
[
"Hawk",
"Alexander T.",
""
],
[
"Johnson",
"Christopher C.",
""
],
[
"Wing",
"Benjamin P.",
""
]
] |
The application of machine learning techniques to large-scale personalized recommendation problems is a challenging task. Such systems must make sense of enormous amounts of implicit feedback in order to understand user preferences across numerous product categories. This paper presents a deep learning based solution to this problem within the collaborative filtering with implicit feedback framework. Our approach combines neural attention mechanisms, which allow for context dependent weighting of past behavioral signals, with representation learning techniques to produce models which obtain extremely high coverage, can easily incorporate new information as it becomes available, and are computationally efficient. Offline experiments demonstrate significant performance improvements when compared to several alternative methods from the literature. Results from an online setting show that the approach compares favorably with current production techniques used to produce personalized product recommendations.
|
2405.20590
|
Junzhi Wen
|
Junzhi Wen, Rafal A. Angryk
|
Class-Based Time Series Data Augmentation to Mitigate Extreme Class
Imbalance for Solar Flare Prediction
| null | null | null | null |
cs.LG astro-ph.IM astro-ph.SR cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Time series data plays a crucial role across various domains, making it
valuable for decision-making and predictive modeling. Machine learning (ML) and
deep learning (DL) have shown promise in this regard, yet their performance
hinges on data quality and quantity, often constrained by data scarcity and
class imbalance, particularly for rare events like solar flares. Data
augmentation techniques offer a potential solution to address these challenges,
yet their effectiveness on multivariate time series datasets remains
underexplored. In this study, we propose a novel data augmentation method for
time series data named Mean Gaussian Noise (MGN). We investigate the
performance of MGN compared to eight existing basic data augmentation methods
on a multivariate time series dataset for solar flare prediction, SWAN-SF,
using an ML algorithm for time series data, TimeSeriesSVC. The results
demonstrate the efficacy of MGN and highlight its potential for improving
classification performance in scenarios with extremely imbalanced data. Our
time complexity analysis shows that MGN also has a competitive computational
cost compared to the investigated alternative methods.
|
[
{
"created": "Fri, 31 May 2024 03:03:19 GMT",
"version": "v1"
}
] |
2024-06-03
|
[
[
"Wen",
"Junzhi",
""
],
[
"Angryk",
"Rafal A.",
""
]
] |
Time series data plays a crucial role across various domains, making it valuable for decision-making and predictive modeling. Machine learning (ML) and deep learning (DL) have shown promise in this regard, yet their performance hinges on data quality and quantity, often constrained by data scarcity and class imbalance, particularly for rare events like solar flares. Data augmentation techniques offer a potential solution to address these challenges, yet their effectiveness on multivariate time series datasets remains underexplored. In this study, we propose a novel data augmentation method for time series data named Mean Gaussian Noise (MGN). We investigate the performance of MGN compared to eight existing basic data augmentation methods on a multivariate time series dataset for solar flare prediction, SWAN-SF, using an ML algorithm for time series data, TimeSeriesSVC. The results demonstrate the efficacy of MGN and highlight its potential for improving classification performance in scenarios with extremely imbalanced data. Our time complexity analysis shows that MGN also has a competitive computational cost compared to the investigated alternative methods.
|
2304.05468
|
Nikola Milo\v{s}evi\'c Dr
|
Ulfeta A. Marovac, Aldina R. Avdi\'c, Nikola Lj. Milo\v{s}evi\'c
|
A Survey of Resources and Methods for Natural Language Processing of
Serbian Language
|
43 pages, submitted to Artificial Intelligence Review Journal
| null | null | null |
cs.CL cs.DL cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Serbian language is a Slavic language spoken by over 12 million speakers
and well understood by over 15 million people. In the area of natural language
processing, it can be considered a low-resourced language. Also, Serbian is
considered a high-inflectional language. The combination of many word
inflections and low availability of language resources makes natural language
processing of Serbian challenging. Nevertheless, over the past three decades,
there have been a number of initiatives to develop resources and methods for
natural language processing of Serbian, ranging from developing a corpus of
free text from books and the internet, annotated corpora for classification and
named entity recognition tasks to various methods and models performing these
tasks. In this paper, we review the initiatives, resources, methods, and their
availability.
|
[
{
"created": "Tue, 11 Apr 2023 19:33:41 GMT",
"version": "v1"
}
] |
2023-04-13
|
[
[
"Marovac",
"Ulfeta A.",
""
],
[
"Avdić",
"Aldina R.",
""
],
[
"Milošević",
"Nikola Lj.",
""
]
] |
The Serbian language is a Slavic language spoken by over 12 million speakers and well understood by over 15 million people. In the area of natural language processing, it can be considered a low-resourced language. Also, Serbian is considered a high-inflectional language. The combination of many word inflections and low availability of language resources makes natural language processing of Serbian challenging. Nevertheless, over the past three decades, there have been a number of initiatives to develop resources and methods for natural language processing of Serbian, ranging from developing a corpus of free text from books and the internet, annotated corpora for classification and named entity recognition tasks to various methods and models performing these tasks. In this paper, we review the initiatives, resources, methods, and their availability.
|
2204.04775
|
Saadullah Amin
|
Saadullah Amin, Noon Pokaratsiri Goldstein, Morgan Kelly Wixted,
Alejandro Garc\'ia-Rudolph, Catalina Mart\'inez-Costa, G\"unter Neumann
|
Few-Shot Cross-lingual Transfer for Coarse-grained De-identification of
Code-Mixed Clinical Texts
|
Accepted by BioNLP'22
| null | null | null |
cs.CL cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the advances in digital healthcare systems offering curated
structured knowledge, much of the critical information still lies in large
volumes of unlabeled and unstructured clinical texts. These texts, which often
contain protected health information (PHI), are exposed to information
extraction tools for downstream applications, risking patient identification.
Existing works in de-identification rely on using large-scale annotated corpora
in English, which often are not suitable in real-world multilingual settings.
Pre-trained language models (LMs) have shown great potential for cross-lingual
transfer in low-resource settings. In this work, we empirically show the
few-shot cross-lingual transfer property of LMs for named entity recognition
(NER) and apply it to solve a low-resource and real-world challenge of
code-mixed (Spanish-Catalan) clinical notes de-identification in the stroke
domain. We annotate a gold evaluation dataset to assess few-shot setting
performance where we only use a few hundred labeled examples for training. Our
model improves the zero-shot F1-score from 73.7% to 91.2% on the gold
evaluation set when adapting Multilingual BERT (mBERT) (Devlin et al., 2019)
from the MEDDOCAN (Marimon et al., 2019) corpus with our few-shot cross-lingual
target corpus. When generalized to an out-of-sample test set, the best model
achieves a human-evaluation F1-score of 97.2%.
|
[
{
"created": "Sun, 10 Apr 2022 21:46:52 GMT",
"version": "v1"
}
] |
2022-04-12
|
[
[
"Amin",
"Saadullah",
""
],
[
"Goldstein",
"Noon Pokaratsiri",
""
],
[
"Wixted",
"Morgan Kelly",
""
],
[
"García-Rudolph",
"Alejandro",
""
],
[
"Martínez-Costa",
"Catalina",
""
],
[
"Neumann",
"Günter",
""
]
] |
Despite the advances in digital healthcare systems offering curated structured knowledge, much of the critical information still lies in large volumes of unlabeled and unstructured clinical texts. These texts, which often contain protected health information (PHI), are exposed to information extraction tools for downstream applications, risking patient identification. Existing works in de-identification rely on using large-scale annotated corpora in English, which often are not suitable in real-world multilingual settings. Pre-trained language models (LMs) have shown great potential for cross-lingual transfer in low-resource settings. In this work, we empirically show the few-shot cross-lingual transfer property of LMs for named entity recognition (NER) and apply it to solve a low-resource and real-world challenge of code-mixed (Spanish-Catalan) clinical notes de-identification in the stroke domain. We annotate a gold evaluation dataset to assess few-shot setting performance where we only use a few hundred labeled examples for training. Our model improves the zero-shot F1-score from 73.7% to 91.2% on the gold evaluation set when adapting Multilingual BERT (mBERT) (Devlin et al., 2019) from the MEDDOCAN (Marimon et al., 2019) corpus with our few-shot cross-lingual target corpus. When generalized to an out-of-sample test set, the best model achieves a human-evaluation F1-score of 97.2%.
|
1902.09722
|
Tatsuya Shiraishi
|
Tatsuya Shiraishi, Tam Le, Hisashi Kashima, Makoto Yamada
|
Topological Bayesian Optimization with Persistence Diagrams
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding an optimal parameter of a black-box function is important for
searching stable material structures and finding optimal neural network
structures, and Bayesian optimization algorithms are widely used for the
purpose. However, most of existing Bayesian optimization algorithms can only
handle vector data and cannot handle complex structured data. In this paper, we
propose the topological Bayesian optimization, which can efficiently find an
optimal solution from structured data using \emph{topological information}.
More specifically, in order to apply Bayesian optimization to structured data,
we extract useful topological information from a structure and measure the
proper similarity between structures. To this end, we utilize persistent
homology, which is a topological data analysis method that was recently applied
in machine learning. Moreover, we propose the Bayesian optimization algorithm
that can handle multiple types of topological information by using a linear
combination of kernels for persistence diagrams. Through experiments, we show
that topological information extracted by persistent homology contributes to a
more efficient search for optimal structures compared to the random search
baseline and the graph Bayesian optimization algorithm.
|
[
{
"created": "Tue, 26 Feb 2019 04:13:07 GMT",
"version": "v1"
}
] |
2019-02-27
|
[
[
"Shiraishi",
"Tatsuya",
""
],
[
"Le",
"Tam",
""
],
[
"Kashima",
"Hisashi",
""
],
[
"Yamada",
"Makoto",
""
]
] |
Finding an optimal parameter of a black-box function is important for searching stable material structures and finding optimal neural network structures, and Bayesian optimization algorithms are widely used for the purpose. However, most existing Bayesian optimization algorithms can only handle vector data and cannot handle complex structured data. In this paper, we propose the topological Bayesian optimization, which can efficiently find an optimal solution from structured data using \emph{topological information}. More specifically, in order to apply Bayesian optimization to structured data, we extract useful topological information from a structure and measure the proper similarity between structures. To this end, we utilize persistent homology, which is a topological data analysis method that was recently applied in machine learning. Moreover, we propose the Bayesian optimization algorithm that can handle multiple types of topological information by using a linear combination of kernels for persistence diagrams. Through experiments, we show that topological information extracted by persistent homology contributes to a more efficient search for optimal structures compared to the random search baseline and the graph Bayesian optimization algorithm.
|
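The linear combination of kernels for persistence diagrams described in the abstract can be illustrated with a toy sketch. The vectorization below (point count, mean lifetime, max lifetime) is a crude stand-in for a proper diagram kernel such as the persistence scale-space or sliced-Wasserstein kernel; all names and parameters are illustrative, not the authors' implementation:

```python
import math

def persistence_stats(diagram):
    """Crude feature vector for a persistence diagram given as (birth, death)
    pairs: number of points, mean lifetime, max lifetime. A stand-in for a
    proper diagram kernel or vectorization."""
    lifetimes = [death - birth for birth, death in diagram]
    if not lifetimes:
        return (0.0, 0.0, 0.0)
    return (float(len(lifetimes)),
            sum(lifetimes) / len(lifetimes),
            max(lifetimes))

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def combined_kernel(diagrams_x, diagrams_y, weights, gamma=1.0):
    """Linear combination of one kernel per diagram type (e.g. one
    persistence diagram per homology dimension), as in the multi-kernel
    surrogate the abstract proposes."""
    return sum(w * rbf(persistence_stats(dx), persistence_stats(dy), gamma)
               for w, dx, dy in zip(weights, diagrams_x, diagrams_y))
```

With nonnegative weights summing to one, identical structures get kernel value 1, so the combination remains a valid covariance for a Gaussian-process surrogate.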
1408.6282
|
Thomas Pajor
|
Edith Cohen, Daniel Delling, Thomas Pajor, Renato F. Werneck
|
Sketch-based Influence Maximization and Computation: Scaling up with
Guarantees
|
10 pages, 5 figures. Appeared at the 23rd Conference on Information
and Knowledge Management (CIKM 2014) in Shanghai, China
| null |
10.1145/2661829.2662077
| null |
cs.DS cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Propagation of contagion through networks is a fundamental process. It is
used to model the spread of information, influence, or a viral infection.
Diffusion patterns can be specified by a probabilistic model, such as
Independent Cascade (IC), or captured by a set of representative traces.
Basic computational problems in the study of diffusion are influence queries
(determining the potency of a specified seed set of nodes) and Influence
Maximization (identifying the most influential seed set of a given size).
Answering each influence query involves many edge traversals, and does not
scale when there are many queries on very large graphs. The gold standard for
Influence Maximization is the greedy algorithm, which iteratively adds to the
seed set a node maximizing the marginal gain in influence. Greedy has a
guaranteed approximation ratio of at least (1-1/e) and actually produces a
sequence of nodes, with each prefix having an approximation guarantee with respect
to the same-size optimum. Since Greedy does not scale well beyond a few million
edges, for larger inputs one must currently use either heuristics or
alternative algorithms designed for a pre-specified small seed set size.
We develop a novel sketch-based design for influence computation. Our greedy
Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with
billions of edges, with one to two orders of magnitude speedup over the best
greedy methods. It still has a guaranteed approximation ratio, and in practice
its quality nearly matches that of exact greedy. We also present influence
oracles, which use linear-time preprocessing to generate a small sketch for
each node, allowing the influence of any seed set to be quickly answered from
the sketches of its nodes.
|
[
{
"created": "Tue, 26 Aug 2014 23:48:19 GMT",
"version": "v1"
}
] |
2014-08-28
|
[
[
"Cohen",
"Edith",
""
],
[
"Delling",
"Daniel",
""
],
[
"Pajor",
"Thomas",
""
],
[
"Werneck",
"Renato F.",
""
]
] |
Propagation of contagion through networks is a fundamental process. It is used to model the spread of information, influence, or a viral infection. Diffusion patterns can be specified by a probabilistic model, such as Independent Cascade (IC), or captured by a set of representative traces. Basic computational problems in the study of diffusion are influence queries (determining the potency of a specified seed set of nodes) and Influence Maximization (identifying the most influential seed set of a given size). Answering each influence query involves many edge traversals, and does not scale when there are many queries on very large graphs. The gold standard for Influence Maximization is the greedy algorithm, which iteratively adds to the seed set a node maximizing the marginal gain in influence. Greedy has a guaranteed approximation ratio of at least (1-1/e) and actually produces a sequence of nodes, with each prefix having an approximation guarantee with respect to the same-size optimum. Since Greedy does not scale well beyond a few million edges, for larger inputs one must currently use either heuristics or alternative algorithms designed for a pre-specified small seed set size. We develop a novel sketch-based design for influence computation. Our greedy Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with billions of edges, with one to two orders of magnitude speedup over the best greedy methods. It still has a guaranteed approximation ratio, and in practice its quality nearly matches that of exact greedy. We also present influence oracles, which use linear-time preprocessing to generate a small sketch for each node, allowing the influence of any seed set to be quickly answered from the sketches of its nodes.
|
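The exact greedy baseline the abstract builds on can be sketched as follows. This is the plain Monte-Carlo estimator under the Independent Cascade model, not the paper's sketch-based SKIM; the graph, probabilities, and simulation counts are illustrative:

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One Independent Cascade simulation: each newly activated node tries to
    activate each out-neighbor once with probability p. Returns the set of
    activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def influence(graph, seeds, p=0.1, runs=200, seed=0):
    """Monte-Carlo estimate of the expected spread of a seed set."""
    rng = random.Random(seed)
    return sum(len(simulate_ic(graph, seeds, p, rng)) for _ in range(runs)) / runs

def greedy_im(graph, k, p=0.1, runs=200):
    """Greedy Influence Maximization: repeatedly add the node with the
    largest estimated marginal gain in spread."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best = max((n for n in nodes if n not in seeds),
                   key=lambda n: influence(graph, seeds + [n], p, runs))
        seeds.append(best)
    return seeds
```

SKIM's contribution is replacing the inner Monte-Carlo estimate (the expensive part, re-run for every candidate) with combined reachability sketches, which is what lets it scale to billions of edges while keeping the greedy guarantee.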
2010.08210
|
Xue Mengge
|
Mengge Xue, Bowen Yu, Zhenyu Zhang, Tingwen Liu, Yue Zhang, Bin Wang
|
Coarse-to-Fine Pre-training for Named Entity Recognition
| null |
EMNLP 2020
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
More recently, Named Entity Recognition has achieved great advances aided by
pre-training approaches such as BERT. However, current pre-training
techniques focus on building language modeling objectives to learn a general
representation, ignoring named entity-related knowledge. To this end, we
propose a NER-specific pre-training framework to inject coarse-to-fine
automatically mined entity knowledge into pre-trained models. Specifically,
we first warm up the model via an entity span identification task by
training it with Wikipedia anchors, which can be regarded as general-typed
entities. Then we leverage a gazetteer-based distant supervision strategy to
train the model to extract coarse-grained typed entities. Finally, we devise
a self-supervised auxiliary task to mine fine-grained named entity knowledge
via clustering. Empirical studies on three public NER datasets demonstrate
that our framework achieves significant improvements over several
pre-trained baselines, establishing new state-of-the-art performance on
three benchmarks. Besides, we show that our framework gains promising
results without using human-labeled training data, demonstrating its
effectiveness in label-scarce and low-resource scenarios.
|
[
{
"created": "Fri, 16 Oct 2020 07:39:20 GMT",
"version": "v1"
}
] |
2020-10-29
|
[
[
"Xue",
"Mengge",
""
],
[
"Yu",
"Bowen",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Liu",
"Tingwen",
""
],
[
"Zhang",
"Yue",
""
],
[
"Wang",
"Bin",
""
]
] |
More recently, Named Entity Recognition has achieved great advances aided by pre-training approaches such as BERT. However, current pre-training techniques focus on building language modeling objectives to learn a general representation, ignoring named entity-related knowledge. To this end, we propose a NER-specific pre-training framework to inject coarse-to-fine automatically mined entity knowledge into pre-trained models. Specifically, we first warm up the model via an entity span identification task by training it with Wikipedia anchors, which can be regarded as general-typed entities. Then we leverage a gazetteer-based distant supervision strategy to train the model to extract coarse-grained typed entities. Finally, we devise a self-supervised auxiliary task to mine fine-grained named entity knowledge via clustering. Empirical studies on three public NER datasets demonstrate that our framework achieves significant improvements over several pre-trained baselines, establishing new state-of-the-art performance on three benchmarks. Besides, we show that our framework gains promising results without using human-labeled training data, demonstrating its effectiveness in label-scarce and low-resource scenarios.
|
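The gazetteer-based distant supervision step this abstract describes can be sketched as greedy longest-match BIO tagging against a phrase-to-type gazetteer. The phrases, types, and tagging scheme below are illustrative assumptions, not the authors' exact procedure:

```python
def gazetteer_tag(tokens, gazetteer):
    """BIO-tag a token sequence by greedy longest match against a gazetteer
    mapping token tuples to coarse entity types (a sketch of gazetteer-based
    distant supervision for generating silver training labels)."""
    tags, i = ["O"] * len(tokens), 0
    max_len = max((len(p) for p in gazetteer), default=0)
    while i < len(tokens):
        # Try the longest candidate span first, then shrink.
        for span in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = tuple(tokens[i:i + span])
            if phrase in gazetteer:
                etype = gazetteer[phrase]
                tags[i] = "B-" + etype
                for j in range(i + 1, i + span):
                    tags[j] = "I-" + etype
                i += span
                break
        else:  # no match starting at i
            i += 1
    return tags
```

Labels produced this way are noisy (gazetteers miss entities and match spurious spans), which is why the framework uses them only as a coarse-grained pre-training signal rather than gold supervision.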
1708.08016
|
Viraj Mavani
|
Viraj Mavani, Shanmuganathan Raman, Krishna P Miyapuram
|
Facial Expression Recognition using Visual Saliency and Deep Learning
|
6 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We have developed a convolutional neural network for the purpose of
recognizing facial expressions in human beings. We have fine-tuned the existing
convolutional neural network model trained on the visual recognition dataset
used in the ILSVRC2012 to two widely used facial expression datasets - CFEE and
RaFD, which when trained and tested independently yielded test accuracies of
74.79% and 95.71%, respectively. Generalization of results was evident by
training on one dataset and testing on the other. Further, the image product of
the cropped faces and their visual saliency maps were computed using Deep
Multi-Layer Network for saliency prediction and were fed to the facial
expression recognition CNN. In the most generalized experiment, we observed the
top-1 accuracy in the test set to be 65.39%. General confusion trends between
different facial expressions as exhibited by humans were also observed.
|
[
{
"created": "Sat, 26 Aug 2017 20:03:38 GMT",
"version": "v1"
}
] |
2017-08-29
|
[
[
"Mavani",
"Viraj",
""
],
[
"Raman",
"Shanmuganathan",
""
],
[
"Miyapuram",
"Krishna P",
""
]
] |
We have developed a convolutional neural network for the purpose of recognizing facial expressions in human beings. We have fine-tuned the existing convolutional neural network model trained on the visual recognition dataset used in the ILSVRC2012 to two widely used facial expression datasets - CFEE and RaFD, which when trained and tested independently yielded test accuracies of 74.79% and 95.71%, respectively. Generalization of results was evident by training on one dataset and testing on the other. Further, the image product of the cropped faces and their visual saliency maps were computed using Deep Multi-Layer Network for saliency prediction and were fed to the facial expression recognition CNN. In the most generalized experiment, we observed the top-1 accuracy in the test set to be 65.39%. General confusion trends between different facial expressions as exhibited by humans were also observed.
|
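The input representation described above, the element-wise product of a face crop and its predicted saliency map, is simple to state in code. A minimal sketch with images as nested lists, values assumed normalized to [0, 1]:

```python
def saliency_product(image, saliency):
    """Element-wise product of a grayscale image and its saliency map, used
    to emphasize salient facial regions before feeding the result to the
    expression-recognition CNN. Both inputs are assumed to be the same shape
    with values in [0, 1]."""
    return [[p * s for p, s in zip(img_row, sal_row)]
            for img_row, sal_row in zip(image, saliency)]
```

In practice this would be a single NumPy multiplication per channel; the point is only that non-salient pixels are attenuated toward zero while salient ones pass through.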
2209.06888
|
Ana Huaman Quispe
|
Ana Huam\'an Quispe and Stephen Hart and Seth Gee and Robert R.
Burridge
|
ADAMANT: A Pipeline for Adaptable Manipulation Tasks
|
Preprint. In review
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents ADAMANT, a set of software modules that provides grasp
planning capabilities to an existing robot planning and control software
framework. Our presented work allows a user to adapt a manipulation task to be
used under widely different scenarios with minimal user input, thus reducing
the operator's cognitive load. The developed tools include (1) plugin-based
components that make it easy to extend default capabilities and to use
third-party grasp libraries, (2) an object-centric way to define task
constraints, (3) a user-friendly Rviz interface to use the grasp planner
utilities, and (4) interactive tools to use perception data to program a task.
We tested our framework on a wide variety of robot simulations.
|
[
{
"created": "Wed, 14 Sep 2022 19:20:07 GMT",
"version": "v1"
}
] |
2022-09-16
|
[
[
"Quispe",
"Ana Huamán",
""
],
[
"Hart",
"Stephen",
""
],
[
"Gee",
"Seth",
""
],
[
"Burridge",
"Robert R.",
""
]
] |
This paper presents ADAMANT, a set of software modules that provides grasp planning capabilities to an existing robot planning and control software framework. Our presented work allows a user to adapt a manipulation task to be used under widely different scenarios with minimal user input, thus reducing the operator's cognitive load. The developed tools include (1) plugin-based components that make it easy to extend default capabilities and to use third-party grasp libraries, (2) an object-centric way to define task constraints, (3) a user-friendly Rviz interface to use the grasp planner utilities, and (4) interactive tools to use perception data to program a task. We tested our framework on a wide variety of robot simulations.
|
2301.12305
|
Wilka Carvalho
|
Wilka Carvalho, Angelos Filos, Richard L. Lewis, Honglak lee, and
Satinder Singh
|
Composing Task Knowledge with Modular Successor Feature Approximators
|
Accepted to ICLR 2023
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the Successor Features and Generalized Policy Improvement (SF&GPI)
framework has been proposed as a method for learning, composing, and
transferring predictive knowledge and behavior. SF&GPI works by having an agent
learn predictive representations (SFs) that can be combined for transfer to new
tasks with GPI. However, to be effective this approach requires state features
that are useful to predict, and these state features are typically
hand-designed. In this work, we present a novel neural network architecture,
"Modular Successor Feature Approximators" (MSFA), where modules both discover
what is useful to predict, and learn their own predictive representations. We
show that MSFA is able to better generalize compared to baseline architectures
for learning SFs and modular architectures.
|
[
{
"created": "Sat, 28 Jan 2023 23:04:07 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Aug 2023 18:28:51 GMT",
"version": "v2"
}
] |
2023-08-29
|
[
[
"Carvalho",
"Wilka",
""
],
[
"Filos",
"Angelos",
""
],
[
"Lewis",
"Richard L.",
""
],
[
"lee",
"Honglak",
""
],
[
"Singh",
"Satinder",
""
]
] |
Recently, the Successor Features and Generalized Policy Improvement (SF&GPI) framework has been proposed as a method for learning, composing, and transferring predictive knowledge and behavior. SF&GPI works by having an agent learn predictive representations (SFs) that can be combined for transfer to new tasks with GPI. However, to be effective this approach requires state features that are useful to predict, and these state features are typically hand-designed. In this work, we present a novel neural network architecture, "Modular Successor Feature Approximators" (MSFA), where modules both discover what is useful to predict, and learn their own predictive representations. We show that MSFA is able to better generalize compared to baseline architectures for learning SFs and modular architectures.
|
2106.09343
|
Dominik Mach\'a\v{c}ek
|
Dominik Mach\'a\v{c}ek, Mat\'u\v{s} \v{Z}ilinec, Ond\v{r}ej Bojar
|
Lost in Interpreting: Speech Translation from Source or Interpreter?
|
to be published at INTERSPEECH 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Interpreters facilitate multi-lingual meetings but the affordable set of
languages is often smaller than what is needed. Automatic simultaneous speech
translation can extend the set of provided languages. We investigate if such an
automatic system should rather follow the original speaker, or an interpreter
to achieve better translation quality at the cost of increased delay.
To answer the question, we release Europarl Simultaneous Interpreting Corpus
(ESIC), 10 hours of recordings and transcripts of European Parliament speeches
in English, with simultaneous interpreting into Czech and German. We evaluate
quality and latency of speaker-based and interpreter-based spoken translation
systems from English to Czech. We study the differences in implicit
simplification and summarization of the human interpreter compared to a machine
translation system trained to shorten the output to some extent. Finally, we
perform human evaluation to measure information loss of each of these
approaches.
|
[
{
"created": "Thu, 17 Jun 2021 09:32:49 GMT",
"version": "v1"
}
] |
2021-06-18
|
[
[
"Macháček",
"Dominik",
""
],
[
"Žilinec",
"Matúš",
""
],
[
"Bojar",
"Ondřej",
""
]
] |
Interpreters facilitate multi-lingual meetings but the affordable set of languages is often smaller than what is needed. Automatic simultaneous speech translation can extend the set of provided languages. We investigate if such an automatic system should rather follow the original speaker, or an interpreter to achieve better translation quality at the cost of increased delay. To answer the question, we release Europarl Simultaneous Interpreting Corpus (ESIC), 10 hours of recordings and transcripts of European Parliament speeches in English, with simultaneous interpreting into Czech and German. We evaluate quality and latency of speaker-based and interpreter-based spoken translation systems from English to Czech. We study the differences in implicit simplification and summarization of the human interpreter compared to a machine translation system trained to shorten the output to some extent. Finally, we perform human evaluation to measure information loss of each of these approaches.
|
2008.09336
|
Francesco Malandrino
|
Francesco Malandrino and Carla Fabiana Chiasserini and Gian Michele
Dell'Aera
|
An Edge-powered Approach to Assisted Driving
|
GLOBECOM 2020
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automotive services for connected vehicles are one of the main fields of
application for new-generation mobile networks as well as for the edge
computing paradigm. In this paper, we investigate a system architecture that
integrates the distributed vehicular network with the network edge, with the
aim to optimize the vehicle travel times. We then present a queue-based system
model that permits the optimization of the vehicle flows, and we show its
applicability to two relevant services, namely, lane change/merge
(representative of cooperative assisted driving) and navigation. Furthermore,
we introduce an efficient algorithm called Bottleneck Hunting (BH), able to
formulate high-quality flow policies in linear time. We assess the performance
of the proposed system architecture and of BH through a comprehensive and
realistic simulation framework, combining ns-3 and SUMO. The results, derived
under real-world scenarios, show that our solution provides much shorter travel
times than when decisions are made by individual vehicles.
|
[
{
"created": "Fri, 21 Aug 2020 07:03:15 GMT",
"version": "v1"
}
] |
2020-08-24
|
[
[
"Malandrino",
"Francesco",
""
],
[
"Chiasserini",
"Carla Fabiana",
""
],
[
"Dell'Aera",
"Gian Michele",
""
]
] |
Automotive services for connected vehicles are one of the main fields of application for new-generation mobile networks as well as for the edge computing paradigm. In this paper, we investigate a system architecture that integrates the distributed vehicular network with the network edge, with the aim to optimize the vehicle travel times. We then present a queue-based system model that permits the optimization of the vehicle flows, and we show its applicability to two relevant services, namely, lane change/merge (representative of cooperative assisted driving) and navigation. Furthermore, we introduce an efficient algorithm called Bottleneck Hunting (BH), able to formulate high-quality flow policies in linear time. We assess the performance of the proposed system architecture and of BH through a comprehensive and realistic simulation framework, combining ns-3 and SUMO. The results, derived under real-world scenarios, show that our solution provides much shorter travel times than when decisions are made by individual vehicles.
|
2109.07864
|
Maksym Del
|
Maksym Del, Elizaveta Korotkova, Mark Fishel
|
Translation Transformers Rediscover Inherent Data Domains
|
Accepted at WMT21; 15 pages, 7 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Many works proposed methods to improve the performance of Neural Machine
Translation (NMT) models in a domain/multi-domain adaptation scenario. However,
an understanding of how NMT baselines represent text domain information
internally is still lacking. Here we analyze the sentence representations
learned by NMT Transformers and show that these explicitly include the
information on text domains, even after only seeing the input sentences without
domain labels. Furthermore, we show that this internal information is enough
to cluster sentences by their underlying domains without supervision. We show
that NMT models produce clusters better aligned to the actual domains compared
to pre-trained language models (LMs). Notably, when computed on document-level,
NMT cluster-to-domain correspondence nears 100%. We use these findings together
with an approach to NMT domain adaptation using automatically extracted
domains. Whereas previous work relied on external LMs for text clustering, we
propose re-using the NMT model as a source of unsupervised clusters. We perform
an extensive experimental study comparing two approaches across two data
scenarios, three language pairs, and both sentence-level and document-level
clustering, showing equal or significantly superior performance compared to
LMs.
|
[
{
"created": "Thu, 16 Sep 2021 10:58:13 GMT",
"version": "v1"
}
] |
2021-09-17
|
[
[
"Del",
"Maksym",
""
],
[
"Korotkova",
"Elizaveta",
""
],
[
"Fishel",
"Mark",
""
]
] |
Many works proposed methods to improve the performance of Neural Machine Translation (NMT) models in a domain/multi-domain adaptation scenario. However, an understanding of how NMT baselines represent text domain information internally is still lacking. Here we analyze the sentence representations learned by NMT Transformers and show that these explicitly include the information on text domains, even after only seeing the input sentences without domain labels. Furthermore, we show that this internal information is enough to cluster sentences by their underlying domains without supervision. We show that NMT models produce clusters better aligned to the actual domains compared to pre-trained language models (LMs). Notably, when computed on document-level, NMT cluster-to-domain correspondence nears 100%. We use these findings together with an approach to NMT domain adaptation using automatically extracted domains. Whereas previous work relied on external LMs for text clustering, we propose re-using the NMT model as a source of unsupervised clusters. We perform an extensive experimental study comparing two approaches across two data scenarios, three language pairs, and both sentence-level and document-level clustering, showing equal or significantly superior performance compared to LMs.
|
2407.06947
|
Gijs Wijngaard
|
Gijs Wijngaard, Elia Formisano, Michele Esposito, Michel Dumontier
|
Audio-Language Datasets of Scenes and Events: A Survey
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Audio-language models (ALMs) process sounds to provide a linguistic
description of sound-producing events and scenes. Recent advances in computing
power and dataset creation have led to significant progress in this domain.
This paper surveys existing datasets used for training audio-language models,
emphasizing the recent trend towards using large, diverse datasets to enhance
model performance. Key sources of these datasets include the Freesound platform
and AudioSet, which have contributed to the field's rapid growth. Although prior
surveys primarily address techniques and training details, this survey
categorizes and evaluates a wide array of datasets, addressing their origins,
characteristics, and use cases. It also performs a data leak analysis to ensure
dataset integrity and mitigate bias between datasets. This survey was conducted
by analyzing research papers up to and including December 2023, and does not
contain any papers after that period.
|
[
{
"created": "Tue, 9 Jul 2024 15:23:35 GMT",
"version": "v1"
}
] |
2024-07-10
|
[
[
"Wijngaard",
"Gijs",
""
],
[
"Formisano",
"Elia",
""
],
[
"Esposito",
"Michele",
""
],
[
"Dumontier",
"Michel",
""
]
] |
Audio-language models (ALMs) process sounds to provide a linguistic description of sound-producing events and scenes. Recent advances in computing power and dataset creation have led to significant progress in this domain. This paper surveys existing datasets used for training audio-language models, emphasizing the recent trend towards using large, diverse datasets to enhance model performance. Key sources of these datasets include the Freesound platform and AudioSet, which have contributed to the field's rapid growth. Although prior surveys primarily address techniques and training details, this survey categorizes and evaluates a wide array of datasets, addressing their origins, characteristics, and use cases. It also performs a data leak analysis to ensure dataset integrity and mitigate bias between datasets. This survey was conducted by analyzing research papers up to and including December 2023, and does not contain any papers after that period.
|
2002.07890
|
Yongyong Wei
|
Yongyong Wei, Rong Zheng
|
Informative Path Planning for Mobile Sensing with Reinforcement Learning
|
To appear at IEEE INFOCOM 2020
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale spatial data such as air quality, thermal conditions and location
signatures play a vital role in a variety of applications. Collecting such data
manually can be tedious and labour intensive. With the advancement of robotic
technologies, it is feasible to automate such tasks using mobile robots with
sensing and navigation capabilities. However, due to limited battery lifetime
and scarcity of charging stations, it is important to plan paths for the robots
that maximize the utility of data collection, also known as the informative
path planning (IPP) problem. In this paper, we propose a novel IPP algorithm
using reinforcement learning (RL). A constrained exploration and exploitation
strategy is designed to address the unique challenges of IPP, and is shown to
have fast convergence and better optimality than a classical reinforcement
learning approach. Extensive experiments using real-world measurement data
demonstrate that the proposed algorithm outperforms state-of-the-art algorithms
in most test cases. Interestingly, unlike existing solutions that have to be
re-executed when any input parameter changes, our RL-based solution allows a
degree of transferability across different problem instances.
|
[
{
"created": "Tue, 18 Feb 2020 21:47:00 GMT",
"version": "v1"
}
] |
2020-02-20
|
[
[
"Wei",
"Yongyong",
""
],
[
"Zheng",
"Rong",
""
]
] |
Large-scale spatial data such as air quality, thermal conditions and location signatures play a vital role in a variety of applications. Collecting such data manually can be tedious and labour intensive. With the advancement of robotic technologies, it is feasible to automate such tasks using mobile robots with sensing and navigation capabilities. However, due to limited battery lifetime and scarcity of charging stations, it is important to plan paths for the robots that maximize the utility of data collection, also known as the informative path planning (IPP) problem. In this paper, we propose a novel IPP algorithm using reinforcement learning (RL). A constrained exploration and exploitation strategy is designed to address the unique challenges of IPP, and is shown to have fast convergence and better optimality than a classical reinforcement learning approach. Extensive experiments using real-world measurement data demonstrate that the proposed algorithm outperforms state-of-the-art algorithms in most test cases. Interestingly, unlike existing solutions that have to be re-executed when any input parameter changes, our RL-based solution allows a degree of transferability across different problem instances.
|
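The reinforcement-learning formulation of informative path planning can be illustrated with a toy tabular stand-in: a robot on a line of cells with a fixed step budget collects each cell's measurement value on its first visit. This sketch uses optimistic initialization in place of the paper's constrained exploration-exploitation strategy; the environment, state encoding, and hyperparameters are all illustrative assumptions:

```python
import random

def train_ipp(rewards, budget, episodes=6000, alpha=0.5, eps=0.1, seed=0):
    """Tabular Q-learning for a toy informative-path-planning task on a line
    of cells. State = (position, rightmost cell visited, steps left), so
    first-visit rewards stay Markov; gamma = 1 over a finite horizon.
    Optimistic initial Q-values drive systematic exploration (a crude
    stand-in for the paper's constrained exploration strategy)."""
    rng, n = random.Random(seed), len(rewards)
    opt = max(rewards) * budget + 1.0  # optimistic initial value
    Q = {}
    def q(s):
        return Q.setdefault(s, [opt, opt])  # action 0 = left, 1 = right
    for _ in range(episodes):
        pos, hi = 0, 0
        for t in range(budget, 0, -1):
            s = (pos, hi, t)
            a = rng.randrange(2) if rng.random() < eps else q(s).index(max(q(s)))
            nxt = max(0, pos - 1) if a == 0 else min(n - 1, pos + 1)
            r = rewards[nxt] if nxt > hi else 0.0  # first-visit reward only
            hi2 = max(hi, nxt)
            target = r + (max(q((nxt, hi2, t - 1))) if t > 1 else 0.0)
            q(s)[a] += alpha * (target - q(s)[a])
            pos, hi = nxt, hi2
    return Q

def greedy_return(Q, rewards, budget):
    """Roll out the learned greedy policy and return the collected utility."""
    pos, hi, total = 0, 0, 0.0
    for t in range(budget, 0, -1):
        qs = Q.get((pos, hi, t), [0.0, 0.0])
        a = qs.index(max(qs))
        pos = max(0, pos - 1) if a == 0 else min(len(rewards) - 1, pos + 1)
        total += rewards[pos] if pos > hi else 0.0
        hi = max(hi, pos)
    return total
```

The real problem replaces the line with a spatial field, the reward with an information-gain utility, and the tabular Q with a function approximator, but the budget-constrained structure is the same.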
2009.08302
|
Pallavi Bagga
|
Pallavi Bagga, Nicola Paoletti and Kostas Stathis
|
Learnable Strategies for Bilateral Agent Negotiation over Multiple
Issues
| null | null | null | null |
cs.MA cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel bilateral negotiation model that allows a self-interested
agent to learn how to negotiate over multiple issues in the presence of user
preference uncertainty. The model relies upon interpretable strategy templates
representing the tactics the agent should employ during the negotiation and
learns template parameters to maximize the average utility received over
multiple negotiations, thus resulting in optimal bid acceptance and generation.
Our model also uses deep reinforcement learning to evaluate threshold utility
values, for those tactics that require them, thereby deriving optimal utilities
for every environment state. To handle user preference uncertainty, the model
relies on a stochastic search to find the user model that best agrees with a given
partial preference profile. Multi-objective optimization and multi-criteria
decision-making methods are applied at negotiation time to generate
Pareto-optimal outcomes thereby increasing the number of successful (win-win)
negotiations. Rigorous experimental evaluations show that the agent employing
our model outperforms the winning agents of the 10th Automated Negotiating
Agents Competition (ANAC'19) in terms of individual as well as social-welfare
utilities.
|
[
{
"created": "Thu, 17 Sep 2020 13:52:18 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jan 2022 14:01:57 GMT",
"version": "v2"
}
] |
2022-01-10
|
[
[
"Bagga",
"Pallavi",
""
],
[
"Paoletti",
"Nicola",
""
],
[
"Stathis",
"Kostas",
""
]
] |
We present a novel bilateral negotiation model that allows a self-interested agent to learn how to negotiate over multiple issues in the presence of user preference uncertainty. The model relies upon interpretable strategy templates representing the tactics the agent should employ during the negotiation and learns template parameters to maximize the average utility received over multiple negotiations, thus resulting in optimal bid acceptance and generation. Our model also uses deep reinforcement learning to evaluate threshold utility values, for those tactics that require them, thereby deriving optimal utilities for every environment state. To handle user preference uncertainty, the model relies on a stochastic search to find the user model that best agrees with a given partial preference profile. Multi-objective optimization and multi-criteria decision-making methods are applied at negotiation time to generate Pareto-optimal outcomes thereby increasing the number of successful (win-win) negotiations. Rigorous experimental evaluations show that the agent employing our model outperforms the winning agents of the 10th Automated Negotiating Agents Competition (ANAC'19) in terms of individual as well as social-welfare utilities.
|
1907.13070
|
Telma Pereira
|
Telma Pereira, Sofia Pires, Marta Gromicho, Susana Pinto, Mamede de
Carvalho, Sara C.Madeira
|
Predicting assisted ventilation in Amyotrophic Lateral Sclerosis using a
mixture of experts and conformal predictors
| null |
KDD 2019 Workshop on Applied Data Science for Healthcare
| null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease
characterized by a rapid motor decline, leading to respiratory failure and
subsequently to death. In this context, researchers have sought models to
automatically predict disease progression to assisted ventilation in ALS
patients. However, the clinical translation of such models is limited by the
lack of insight 1) on the risk of error for predictions at the patient level,
and 2) on the most adequate time to administer non-invasive ventilation. To
address these issues, we combine Conformal Prediction (a machine learning
framework that complements predictions with confidence measures) and a mixture
of experts into a prognostic model which not only predicts whether an ALS
patient will suffer from respiratory insufficiency but also the most likely
time window of occurrence, at a given reliability level. Promising results
were obtained, with nearly 80% of predictions being correctly identified.
|
[
{
"created": "Tue, 30 Jul 2019 16:55:29 GMT",
"version": "v1"
}
] |
2019-07-31
|
[
[
"Pereira",
"Telma",
""
],
[
"Pires",
"Sofia",
""
],
[
"Gromicho",
"Marta",
""
],
[
"Pinto",
"Susana",
""
],
[
"de Carvalho",
"Mamede",
""
],
[
"Madeira",
"Sara C.",
""
]
] |
Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease characterized by a rapid motor decline, leading to respiratory failure and subsequently to death. In this context, researchers have sought models to automatically predict disease progression to assisted ventilation in ALS patients. However, the clinical translation of such models is limited by the lack of insight 1) on the risk of error for predictions at the patient level, and 2) on the most adequate time to administer non-invasive ventilation. To address these issues, we combine Conformal Prediction (a machine learning framework that complements predictions with confidence measures) and a mixture of experts into a prognostic model which not only predicts whether an ALS patient will suffer from respiratory insufficiency but also the most likely time window of occurrence, at a given reliability level. Promising results were obtained, with nearly 80% of predictions being correctly identified.
|
2303.02389
|
Yuxuan Duan
|
Yuxuan Duan, Yan Hong, Li Niu, Liqing Zhang
|
Few-Shot Defect Image Generation via Defect-Aware Feature Manipulation
|
Accepted by AAAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The performance of defect inspection has been severely hindered by
insufficient defect images in industry, which can be alleviated by generating
more samples as data augmentation. We propose the first defect image generation
method for the challenging few-shot case. Given just a handful of defect images
and relatively more defect-free ones, our goal is to augment the dataset with
new defect images. Our method consists of two training stages. First, we train
a data-efficient StyleGAN2 on defect-free images as the backbone. Second, we
attach defect-aware residual blocks to the backbone, which learn to produce
reasonable defect masks and accordingly manipulate the features within the
masked regions by training the added modules on limited defect images.
Extensive experiments on the MVTec AD dataset not only validate the
effectiveness of our method in generating realistic and diverse defect images,
but also manifest the benefits it brings to downstream defect inspection tasks.
Codes are available at https://github.com/Ldhlwh/DFMGAN.
|
[
{
"created": "Sat, 4 Mar 2023 11:43:08 GMT",
"version": "v1"
}
] |
2023-03-07
|
[
[
"Duan",
"Yuxuan",
""
],
[
"Hong",
"Yan",
""
],
[
"Niu",
"Li",
""
],
[
"Zhang",
"Liqing",
""
]
] |
The performance of defect inspection has been severely hindered by insufficient defect images in industry, which can be alleviated by generating more samples as data augmentation. We propose the first defect image generation method for the challenging few-shot case. Given just a handful of defect images and relatively more defect-free ones, our goal is to augment the dataset with new defect images. Our method consists of two training stages. First, we train a data-efficient StyleGAN2 on defect-free images as the backbone. Second, we attach defect-aware residual blocks to the backbone, which learn to produce reasonable defect masks and accordingly manipulate the features within the masked regions by training the added modules on limited defect images. Extensive experiments on the MVTec AD dataset not only validate the effectiveness of our method in generating realistic and diverse defect images, but also manifest the benefits it brings to downstream defect inspection tasks. Codes are available at https://github.com/Ldhlwh/DFMGAN.
|
2202.12777
|
Cristiano Politowski
|
Cristiano Politowski, Yann-Ga\"el Gu\'eh\'eneuc, Fabio Petrillo
|
Towards Automated Video Game Testing: Still a Long Way to Go
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the complexity and scope of game development increase, playtesting remains
an essential activity to ensure the quality of video games. Yet, the manual,
ad-hoc nature of playtesting leaves room for improvement in the process. In
this study, we investigate gaps between academic solutions in the literature
for automated video game testing and the needs of video game developers in the
industry. We performed a literature review on automated video game testing and
conducted an online survey with video game developers. The literature results
show a rise in research topics related to automated video game testing. The
survey results show that game developers are skeptical about using automated
agents to test games. We conclude that there is a need for new testing
approaches that do not disrupt the developer workflow. As for researchers,
the focus should be on the testing goal and the testing oracle.
|
[
{
"created": "Fri, 25 Feb 2022 15:49:29 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Mar 2022 19:51:28 GMT",
"version": "v2"
}
] |
2022-03-14
|
[
[
"Politowski",
"Cristiano",
""
],
[
"Guéhéneuc",
"Yann-Gaël",
""
],
[
"Petrillo",
"Fabio",
""
]
] |
As the complexity and scope of game development increase, playtesting remains an essential activity to ensure the quality of video games. Yet, the manual, ad-hoc nature of playtesting leaves room for improvement in the process. In this study, we investigate gaps between academic solutions in the literature for automated video game testing and the needs of video game developers in the industry. We performed a literature review on automated video game testing and conducted an online survey with video game developers. The literature results show a rise in research topics related to automated video game testing. The survey results show that game developers are skeptical about using automated agents to test games. We conclude that there is a need for new testing approaches that do not disrupt the developer workflow. As for researchers, the focus should be on the testing goal and the testing oracle.
|
2111.08515
|
Kenneth Joseph
|
Kenneth Joseph, Benjamin D. Horne, Jon Green, John P. Wihbey
|
Local News Online and COVID in the U.S.: Relationships among Coverage,
Cases, Deaths, and Audience
|
Accepted, ICWSM'22
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
We present analyses from a real-time information monitoring system of online
local news in the U.S. We study relationships among online local news coverage
of COVID, cases and deaths in an area, and properties of local news outlets and
their audiences. Our analysis relies on a unique dataset of the online content
of over 300 local news outlets, encompassing over 750,000 articles over a
period of 10 months spanning April 2020 to February 2021. We find that the rate
of COVID coverage over time by local news outlets was primarily associated with
death rates at the national level, but that this effect dissipated over the
course of the pandemic as news about COVID was steadily displaced by
sociopolitical events, like the 2020 U.S. elections. We also find that both the
volume and content of COVID coverage differed depending on local politics, and
outlet audience size, as well as evidence that more vulnerable populations
received less pandemic-related news.
|
[
{
"created": "Tue, 16 Nov 2021 14:37:23 GMT",
"version": "v1"
}
] |
2021-11-17
|
[
[
"Joseph",
"Kenneth",
""
],
[
"Horne",
"Benjamin D.",
""
],
[
"Green",
"Jon",
""
],
[
"Wihbey",
"John P.",
""
]
] |
We present analyses from a real-time information monitoring system of online local news in the U.S. We study relationships among online local news coverage of COVID, cases and deaths in an area, and properties of local news outlets and their audiences. Our analysis relies on a unique dataset of the online content of over 300 local news outlets, encompassing over 750,000 articles over a period of 10 months spanning April 2020 to February 2021. We find that the rate of COVID coverage over time by local news outlets was primarily associated with death rates at the national level, but that this effect dissipated over the course of the pandemic as news about COVID was steadily displaced by sociopolitical events, like the 2020 U.S. elections. We also find that both the volume and content of COVID coverage differed depending on local politics, and outlet audience size, as well as evidence that more vulnerable populations received less pandemic-related news.
|
2003.06423
|
Divyam Aggarwal
|
Divyam Aggarwal, Dhish Kumar Saxena, Thomas B\"ack, Michael Emmerich
|
On Initializing Airline Crew Pairing Optimization for Large-scale
Complex Flight Networks
|
17 pages, 9 figures, manuscript submitted for review in a refereed
journal
| null | null | null |
cs.AI math.CO math.OC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Crew pairing optimization (CPO) is critically important for any airline,
since its crew operating costs are the second-largest, next to the fuel cost.
CPO aims at generating a set of flight sequences (crew pairings) covering a
flight schedule, at minimum cost, while satisfying several legality
constraints. For large-scale complex flight networks, billion-plus legal
pairings (variables) are possible, rendering their offline enumeration
intractable and an exhaustive search for their minimum-cost full
flight-coverage subset impractical. Even generating an initial feasible
solution (IFS: a manageable set of legal pairings covering all flights), which
could be subsequently optimized, is a difficult (NP-complete) problem. Though
the authors have developed a crew pairing optimizer (AirCROP) as part of a
larger project, this paper focuses specifically on IFS generation through a
novel heuristic based on a divide-and-cover strategy and Integer Programming.
For real-world large and complex flight network datasets (including over 3200
flights and 15 crew bases) provided by GE Aviation, the proposed heuristic
shows up to a ten-fold speed improvement over another state-of-the-art
approach. For the first time, this paper presents an empirical investigation
of the impact of IFS cost on the final (optimized) solution cost, revealing
that too low an IFS cost does not necessarily imply faster convergence for
AirCROP or even lower cost for the optimized solution.
|
[
{
"created": "Sun, 15 Mar 2020 08:21:38 GMT",
"version": "v1"
}
] |
2020-03-17
|
[
[
"Aggarwal",
"Divyam",
""
],
[
"Saxena",
"Dhish Kumar",
""
],
[
"Bäck",
"Thomas",
""
],
[
"Emmerich",
"Michael",
""
]
] |
Crew pairing optimization (CPO) is critically important for any airline, since its crew operating costs are the second-largest, next to the fuel cost. CPO aims at generating a set of flight sequences (crew pairings) covering a flight schedule, at minimum cost, while satisfying several legality constraints. For large-scale complex flight networks, billion-plus legal pairings (variables) are possible, rendering their offline enumeration intractable and an exhaustive search for their minimum-cost full flight-coverage subset impractical. Even generating an initial feasible solution (IFS: a manageable set of legal pairings covering all flights), which could be subsequently optimized, is a difficult (NP-complete) problem. Though the authors have developed a crew pairing optimizer (AirCROP) as part of a larger project, this paper focuses specifically on IFS generation through a novel heuristic based on a divide-and-cover strategy and Integer Programming. For real-world large and complex flight network datasets (including over 3200 flights and 15 crew bases) provided by GE Aviation, the proposed heuristic shows up to a ten-fold speed improvement over another state-of-the-art approach. For the first time, this paper presents an empirical investigation of the impact of IFS cost on the final (optimized) solution cost, revealing that too low an IFS cost does not necessarily imply faster convergence for AirCROP or even lower cost for the optimized solution.
|
2106.10777
|
Mengyu Dai
|
Mengyu Dai and Haibin Hang
|
Manifold Matching via Deep Metric Learning for Generative Modeling
|
ICCV 2021. Code available at
https://github.com/dzld00/pytorch-manifold-matching.git
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a manifold matching approach to generative models which includes a
distribution generator (or data generator) and a metric generator. In our
framework, we view the real data set as a manifold embedded in a
high-dimensional Euclidean space. The distribution generator aims at generating
samples that follow some distribution condensed around the real data manifold.
This is achieved by matching two sets of points using their geometric shape
descriptors, such as centroid and $p$-diameter, with a learned distance metric;
the metric generator utilizes both real data and generated samples to learn a
distance metric which is close to some intrinsic geodesic distance on the real
data manifold. The produced distance metric is further used for manifold
matching. The two networks are learned simultaneously during the training
process. We apply the approach to both unsupervised and supervised learning
tasks: in the unconditional image generation task, the proposed method obtains
competitive results compared with existing generative models; in the
super-resolution task, we incorporate the framework into perception-based
models and improve visual quality by producing samples with more natural
textures. Experiments and analysis demonstrate the feasibility and
effectiveness of the proposed framework.
|
[
{
"created": "Sun, 20 Jun 2021 23:25:01 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jul 2021 03:36:54 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Aug 2021 23:00:22 GMT",
"version": "v3"
}
] |
2021-08-30
|
[
[
"Dai",
"Mengyu",
""
],
[
"Hang",
"Haibin",
""
]
] |
We propose a manifold matching approach to generative models which includes a distribution generator (or data generator) and a metric generator. In our framework, we view the real data set as a manifold embedded in a high-dimensional Euclidean space. The distribution generator aims at generating samples that follow some distribution condensed around the real data manifold. This is achieved by matching two sets of points using their geometric shape descriptors, such as centroid and $p$-diameter, with a learned distance metric; the metric generator utilizes both real data and generated samples to learn a distance metric which is close to some intrinsic geodesic distance on the real data manifold. The produced distance metric is further used for manifold matching. The two networks are learned simultaneously during the training process. We apply the approach to both unsupervised and supervised learning tasks: in the unconditional image generation task, the proposed method obtains competitive results compared with existing generative models; in the super-resolution task, we incorporate the framework into perception-based models and improve visual quality by producing samples with more natural textures. Experiments and analysis demonstrate the feasibility and effectiveness of the proposed framework.
|
1908.03190
|
Hayden Schaeffer
|
Yifan Sun, Linan Zhang, and Hayden Schaeffer
|
NeuPDE: Neural Network Based Ordinary and Partial Differential Equations
for Modeling Time-Dependent Data
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a neural network based approach for extracting models from dynamic
data using ordinary and partial differential equations. In particular, given a
time-series or spatio-temporal dataset, we seek to identify an accurate
governing system which respects the intrinsic differential structure. The
unknown governing model is parameterized by using both (shallow) multilayer
perceptrons and nonlinear differential terms, in order to incorporate relevant
correlations between spatio-temporal samples. We demonstrate the approach on
several examples where the data is sampled from various dynamical systems and
give a comparison to recurrent networks and other data-discovery methods. In
addition, we show that for MNIST and Fashion MNIST, our approach lowers the
parameter cost as compared to other deep neural networks.
|
[
{
"created": "Thu, 8 Aug 2019 17:50:22 GMT",
"version": "v1"
}
] |
2019-08-09
|
[
[
"Sun",
"Yifan",
""
],
[
"Zhang",
"Linan",
""
],
[
"Schaeffer",
"Hayden",
""
]
] |
We propose a neural network based approach for extracting models from dynamic data using ordinary and partial differential equations. In particular, given a time-series or spatio-temporal dataset, we seek to identify an accurate governing system which respects the intrinsic differential structure. The unknown governing model is parameterized by using both (shallow) multilayer perceptrons and nonlinear differential terms, in order to incorporate relevant correlations between spatio-temporal samples. We demonstrate the approach on several examples where the data is sampled from various dynamical systems and give a comparison to recurrent networks and other data-discovery methods. In addition, we show that for MNIST and Fashion MNIST, our approach lowers the parameter cost as compared to other deep neural networks.
|
2012.09852
|
Hanrui Wang
|
Hanrui Wang and Zhekai Zhang and Song Han
|
SpAtten: Efficient Sparse Attention Architecture with Cascade Token and
Head Pruning
|
Published as a conference paper in HPCA 2021; 15 pages, 23 figures
| null |
10.1109/HPCA51647.2021.00018
| null |
cs.AR cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The attention mechanism is becoming increasingly popular in Natural Language
Processing (NLP) applications, showing superior performance to convolutional
and recurrent architectures. However, attention becomes the computation
bottleneck because of its quadratic computational complexity with respect to
input length, complicated data movement, and low arithmetic intensity.
Moreover, existing NN accelerators mainly focus on optimizing convolutional or
recurrent models, and cannot efficiently support attention. In this paper, we
present SpAtten, an efficient algorithm-architecture co-design that leverages
token sparsity, head sparsity, and quantization opportunities to reduce the
attention computation and memory access. Inspired by the high redundancy of
human languages, we propose novel cascade token pruning to prune away
unimportant tokens in the sentence. We also propose cascade head pruning to
remove unessential heads. Cascade pruning is fundamentally different from
weight pruning since there are no trainable weights in the attention
mechanism, and the pruned tokens and heads are selected on the fly. To
efficiently support them in hardware, we design a novel top-k engine to rank
token and head importance scores with high throughput. Furthermore, we propose
progressive quantization that first fetches MSBs only and performs the
computation; if the confidence is low, it fetches LSBs and recomputes the
attention outputs, trading computation for memory reduction.
Extensive experiments on 30 benchmarks show that, on average, SpAtten reduces
DRAM access by 10.0x with no accuracy loss, and achieves 1.6x, 3.0x, 162x,
347x speedup, and 1.4x, 3.2x, 1193x, 4059x energy savings over the A3
accelerator, MNNFast accelerator, TITAN Xp GPU, and Xeon CPU, respectively.
|
[
{
"created": "Thu, 17 Dec 2020 18:59:07 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jan 2021 03:49:57 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Jul 2024 18:48:38 GMT",
"version": "v3"
}
] |
2024-07-22
|
[
[
"Wang",
"Hanrui",
""
],
[
"Zhang",
"Zhekai",
""
],
[
"Han",
"Song",
""
]
] |
The attention mechanism is becoming increasingly popular in Natural Language Processing (NLP) applications, showing superior performance to convolutional and recurrent architectures. However, attention becomes the computation bottleneck because of its quadratic computational complexity with respect to input length, complicated data movement, and low arithmetic intensity. Moreover, existing NN accelerators mainly focus on optimizing convolutional or recurrent models, and cannot efficiently support attention. In this paper, we present SpAtten, an efficient algorithm-architecture co-design that leverages token sparsity, head sparsity, and quantization opportunities to reduce the attention computation and memory access. Inspired by the high redundancy of human languages, we propose novel cascade token pruning to prune away unimportant tokens in the sentence. We also propose cascade head pruning to remove unessential heads. Cascade pruning is fundamentally different from weight pruning since there are no trainable weights in the attention mechanism, and the pruned tokens and heads are selected on the fly. To efficiently support them in hardware, we design a novel top-k engine to rank token and head importance scores with high throughput. Furthermore, we propose progressive quantization that first fetches MSBs only and performs the computation; if the confidence is low, it fetches LSBs and recomputes the attention outputs, trading computation for memory reduction. Extensive experiments on 30 benchmarks show that, on average, SpAtten reduces DRAM access by 10.0x with no accuracy loss, and achieves 1.6x, 3.0x, 162x, 347x speedup, and 1.4x, 3.2x, 1193x, 4059x energy savings over the A3 accelerator, MNNFast accelerator, TITAN Xp GPU, and Xeon CPU, respectively.
|
2407.17399
|
S\'ebastien Herbreteau
|
S\'ebastien Herbreteau and Michael Unser
|
Self-Calibrated Variance-Stabilizing Transformations for Real-World
Image Denoising
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Supervised deep learning has become the method of choice for image denoising.
It involves the training of neural networks on large datasets composed of pairs
of noisy and clean images. However, the necessity of training data that are
specific to the targeted application constrains the widespread use of denoising
networks. Recently, several approaches have been developed to overcome this
difficulty by either artificially generating realistic clean/noisy image
pairs or by training exclusively on noisy images. In this paper, we show that,
contrary to popular belief, denoising networks specialized in the removal of
Gaussian noise can be efficiently leveraged for real-world image denoising,
even without additional training. For this to happen, an appropriate
variance-stabilizing transform (VST) has to be applied beforehand. We propose
an algorithm termed Noise2VST for the learning of such a model-free VST. Our
approach requires only the input noisy image and an off-the-shelf Gaussian
denoiser. We demonstrate through extensive experiments the efficiency and
superiority of Noise2VST in comparison to existing methods trained in the
absence of specific clean/noisy pairs.
|
[
{
"created": "Wed, 24 Jul 2024 16:23:46 GMT",
"version": "v1"
}
] |
2024-07-25
|
[
[
"Herbreteau",
"Sébastien",
""
],
[
"Unser",
"Michael",
""
]
] |
Supervised deep learning has become the method of choice for image denoising. It involves the training of neural networks on large datasets composed of pairs of noisy and clean images. However, the necessity of training data that are specific to the targeted application constrains the widespread use of denoising networks. Recently, several approaches have been developed to overcome this difficulty by either artificially generating realistic clean/noisy image pairs or by training exclusively on noisy images. In this paper, we show that, contrary to popular belief, denoising networks specialized in the removal of Gaussian noise can be efficiently leveraged for real-world image denoising, even without additional training. For this to happen, an appropriate variance-stabilizing transform (VST) has to be applied beforehand. We propose an algorithm termed Noise2VST for the learning of such a model-free VST. Our approach requires only the input noisy image and an off-the-shelf Gaussian denoiser. We demonstrate through extensive experiments the efficiency and superiority of Noise2VST in comparison to existing methods trained in the absence of specific clean/noisy pairs.
|
2405.01790
|
Olubusayo Olabisi
|
Olubusayo Olabisi and Ameeta Agrawal
|
Understanding Position Bias Effects on Fairness in Social Multi-Document
Summarization
|
Accepted at VarDial 2024
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Text summarization models have typically focused on optimizing aspects of
quality such as fluency, relevance, and coherence, particularly in the context
of news articles. However, summarization models are increasingly being used to
summarize diverse sources of text, such as social media data, that encompass a
wide demographic user base. It is thus crucial to assess not only the quality
of the generated summaries, but also the extent to which they can fairly
represent the opinions of diverse social groups. Position bias, a long-known
issue in news summarization, has received limited attention in the context of
social multi-document summarization. We investigate this phenomenon in depth by
analyzing the effect of group ordering in input documents when summarizing
tweets from three distinct linguistic communities: African-American English,
Hispanic-aligned Language, and White-aligned Language. Our empirical analysis
shows that although the textual quality of the summaries remains consistent
regardless of the input document order, in terms of fairness, the results vary
significantly depending on how the dialect groups are presented in the input
data. Our results suggest that position bias manifests differently in social
multi-document summarization, severely impacting the fairness of summarization
models.
|
[
{
"created": "Fri, 3 May 2024 00:19:31 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Olabisi",
"Olubusayo",
""
],
[
"Agrawal",
"Ameeta",
""
]
] |
Text summarization models have typically focused on optimizing aspects of quality such as fluency, relevance, and coherence, particularly in the context of news articles. However, summarization models are increasingly being used to summarize diverse sources of text, such as social media data, that encompass a wide demographic user base. It is thus crucial to assess not only the quality of the generated summaries, but also the extent to which they can fairly represent the opinions of diverse social groups. Position bias, a long-known issue in news summarization, has received limited attention in the context of social multi-document summarization. We investigate this phenomenon in depth by analyzing the effect of group ordering in input documents when summarizing tweets from three distinct linguistic communities: African-American English, Hispanic-aligned Language, and White-aligned Language. Our empirical analysis shows that although the textual quality of the summaries remains consistent regardless of the input document order, in terms of fairness, the results vary significantly depending on how the dialect groups are presented in the input data. Our results suggest that position bias manifests differently in social multi-document summarization, severely impacting the fairness of summarization models.
|
2210.00563
|
Julia Balla
|
Julia Balla, Sihao Huang, Owen Dugan, Rumen Dangovski, Marin Soljacic
|
AI-Assisted Discovery of Quantitative and Formal Models in Social
Science
|
19 pages, 4 figures
| null | null | null |
cs.SC cs.LG econ.EM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In social science, formal and quantitative models, such as ones describing
economic growth and collective action, are used to formulate mechanistic
explanations, provide predictions, and uncover questions about observed
phenomena. Here, we demonstrate the use of a machine learning system to aid the
discovery of symbolic models that capture nonlinear and dynamical relationships
in social science datasets. By extending neuro-symbolic methods to find compact
functions and differential equations in noisy and longitudinal data, we show
that our system can be used to discover interpretable models from real-world
data in economics and sociology. Augmenting existing workflows with symbolic
regression can help uncover novel relationships and explore counterfactual
models during the scientific process. We propose that this AI-assisted
framework can bridge parametric and non-parametric models commonly employed in
social science research by systematically exploring the space of nonlinear
models and enabling fine-grained control over expressivity and
interpretability.
|
[
{
"created": "Sun, 2 Oct 2022 16:25:47 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Oct 2022 13:29:26 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Aug 2023 17:45:13 GMT",
"version": "v3"
}
] |
2023-08-17
|
[
[
"Balla",
"Julia",
""
],
[
"Huang",
"Sihao",
""
],
[
"Dugan",
"Owen",
""
],
[
"Dangovski",
"Rumen",
""
],
[
"Soljacic",
"Marin",
""
]
] |
In social science, formal and quantitative models, such as ones describing economic growth and collective action, are used to formulate mechanistic explanations, provide predictions, and uncover questions about observed phenomena. Here, we demonstrate the use of a machine learning system to aid the discovery of symbolic models that capture nonlinear and dynamical relationships in social science datasets. By extending neuro-symbolic methods to find compact functions and differential equations in noisy and longitudinal data, we show that our system can be used to discover interpretable models from real-world data in economics and sociology. Augmenting existing workflows with symbolic regression can help uncover novel relationships and explore counterfactual models during the scientific process. We propose that this AI-assisted framework can bridge parametric and non-parametric models commonly employed in social science research by systematically exploring the space of nonlinear models and enabling fine-grained control over expressivity and interpretability.
|
1611.02112
|
Emanuel Kieronski
|
Bartosz Bednarczyk, Witold Charatonik and Emanuel Kiero\'nski
|
Extending Two-Variable Logic on Trees
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The finite satisfiability problem for the two-variable fragment of
first-order logic interpreted over trees was recently shown to be
ExpSpace-complete. We consider two extensions of this logic. We show that
adding either additional binary symbols or counting quantifiers to the logic
does not affect the complexity of the finite satisfiability problem. However,
combining the two extensions and adding both binary symbols and counting
quantifiers leads to an explosion of this complexity.
We also compare the expressive power of the two-variable fragment over trees
with its extension with counting quantifiers. It turns out that the two logics
are equally expressive, although counting quantifiers do add expressive power
in the restricted case of unordered trees.
|
[
{
"created": "Mon, 7 Nov 2016 15:30:35 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Nov 2016 12:37:20 GMT",
"version": "v2"
}
] |
2016-11-28
|
[
[
"Bednarczyk",
"Bartosz",
""
],
[
"Charatonik",
"Witold",
""
],
[
"Kieroński",
"Emanuel",
""
]
] |
The finite satisfiability problem for the two-variable fragment of first-order logic interpreted over trees was recently shown to be ExpSpace-complete. We consider two extensions of this logic. We show that adding either additional binary symbols or counting quantifiers to the logic does not affect the complexity of the finite satisfiability problem. However, combining the two extensions and adding both binary symbols and counting quantifiers leads to an explosion of this complexity. We also compare the expressive power of the two-variable fragment over trees with its extension with counting quantifiers. It turns out that the two logics are equally expressive, although counting quantifiers do add expressive power in the restricted case of unordered trees.
|
2011.06404
|
Rohit Chadha
|
Gilles Barthe and Rohit Chadha and Paul Krogmeier and A. Prasad Sistla
and Mahesh Viswanathan
|
Deciding Accuracy of Differential Privacy Schemes
| null | null |
10.1145/3434289
| null |
cs.CR cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Differential privacy is a mathematical framework for developing statistical
computations with provable guarantees of privacy and accuracy. In contrast to
the privacy component of differential privacy, which has a clear mathematical
and intuitive meaning, the accuracy component of differential privacy does not
have a generally accepted definition; accuracy claims of differential privacy
algorithms vary from algorithm to algorithm and are not instantiations of a
general definition. We identify program discontinuity as a common theme in
existing \emph{ad hoc} definitions and introduce an alternative notion of
accuracy parametrized by what we call the {\distance}: the {\distance} of an
input $x$ w.r.t. a deterministic computation $f$ and a distance $d$ is the
minimal distance $d(x,y)$ over all $y$ such that $f(y)\neq f(x)$. We show that
our notion of accuracy subsumes the definition used in theoretical computer
science, and captures known accuracy claims for differential privacy
algorithms. In fact, our general notion of accuracy helps us prove better
claims in some cases. Next, we study the decidability of accuracy. We first
show that accuracy is in general undecidable. Then, we define a non-trivial
class of probabilistic computations for which accuracy is decidable
(unconditionally, or assuming Schanuel's conjecture). We implement our decision
procedure and experimentally evaluate the effectiveness of our approach for
generating proofs or counterexamples of accuracy for common algorithms from the
literature.
|
[
{
"created": "Thu, 12 Nov 2020 14:17:51 GMT",
"version": "v1"
}
] |
2020-11-13
|
[
[
"Barthe",
"Gilles",
""
],
[
"Chadha",
"Rohit",
""
],
[
"Krogmeier",
"Paul",
""
],
[
"Sistla",
"A. Prasad",
""
],
[
"Viswanathan",
"Mahesh",
""
]
] |
Differential privacy is a mathematical framework for developing statistical computations with provable guarantees of privacy and accuracy. In contrast to the privacy component of differential privacy, which has a clear mathematical and intuitive meaning, the accuracy component of differential privacy does not have a generally accepted definition; accuracy claims of differential privacy algorithms vary from algorithm to algorithm and are not instantiations of a general definition. We identify program discontinuity as a common theme in existing \emph{ad hoc} definitions and introduce an alternative notion of accuracy parametrized by what we call the {\distance}: the {\distance} of an input $x$ w.r.t. a deterministic computation $f$ and a distance $d$ is the minimal distance $d(x,y)$ over all $y$ such that $f(y)\neq f(x)$. We show that our notion of accuracy subsumes the definition used in theoretical computer science, and captures known accuracy claims for differential privacy algorithms. In fact, our general notion of accuracy helps us prove better claims in some cases. Next, we study the decidability of accuracy. We first show that accuracy is in general undecidable. Then, we define a non-trivial class of probabilistic computations for which accuracy is decidable (unconditionally, or assuming Schanuel's conjecture). We implement our decision procedure and experimentally evaluate the effectiveness of our approach for generating proofs or counterexamples of accuracy for common algorithms from the literature.
|
1610.05579
|
Fadoua Hassen
|
Fadoua Hassen and Lotfi Mhamdi
|
A Scalable Multi-Stage Packet-Switch for Data Center Networks
|
15 pages, 20 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growing trends of data centers over the last decades, including social
networking, cloud-based applications and storage technologies, have enabled
many advances in the networking area. Recent changes imply a continuous demand
for bandwidth to manage the large amount of packetized traffic. Cluster
switches and routers make up the switching fabric in a Data Center Network
(DCN) environment and provide interconnectivity between elements of the same DC
and between DCs. To handle constantly variable loads, switches need to deliver
outstanding throughput along with resiliency and scalability for DCN
requirements. Conventional DCN switches adopt crossbars and/or blocks of
memories mounted in a multistage fashion (commonly 2-Tier or 3-Tier).
However, current multistage switches, with their space-memory variants, are
either too complex to implement, have poor performance, or are not cost
effective.
We propose a novel and highly scalable multistage switch based on
Networks-on-Chip (NoC) fabrics for DCNs. In particular, we describe a
three-stage Clos packet-switch with a Round Robin packet dispatching scheme
where each central stage module is based on a Unidirectional NoC (UDN), instead
of the conventional single-hop crossbar. The design, referred to as Clos-UDN,
overcomes shortcomings of traditional multistage architectures as it (i)
Obviates the need for complex and costly input modules, by means of a few, yet
simple, input FIFO queues. (ii) Avoids the need for a complex and synchronized
scheduling process over a high number of input-output modules and/or port
pairs. (iii) Provides speedup, load balancing and path-diversity thanks to a
dynamic dispatching scheme as well as the NoC based fabric nature. Simulations
show that the Clos-UDN outperforms some common multistage switches under a
range of input traffic loads, making it highly appealing for ultra-high capacity DC
networks.
|
[
{
"created": "Sat, 24 Sep 2016 12:12:53 GMT",
"version": "v1"
}
] |
2016-10-19
|
[
[
"Hassen",
"Fadoua",
""
],
[
"Mhamdi",
"Lotfi",
""
]
] |
The growing trends of data centers over the last decades, including social networking, cloud-based applications and storage technologies, have enabled many advances in the networking area. Recent changes imply a continuous demand for bandwidth to manage the large amount of packetized traffic. Cluster switches and routers make up the switching fabric in a Data Center Network (DCN) environment and provide interconnectivity between elements of the same DC and between DCs. To handle constantly variable loads, switches need to deliver outstanding throughput along with resiliency and scalability for DCN requirements. Conventional DCN switches adopt crossbars and/or blocks of memories mounted in a multistage fashion (commonly 2-Tier or 3-Tier). However, current multistage switches, with their space-memory variants, are either too complex to implement, have poor performance, or are not cost effective. We propose a novel and highly scalable multistage switch based on Networks-on-Chip (NoC) fabrics for DCNs. In particular, we describe a three-stage Clos packet-switch with a Round Robin packet dispatching scheme where each central stage module is based on a Unidirectional NoC (UDN), instead of the conventional single-hop crossbar. The design, referred to as Clos-UDN, overcomes shortcomings of traditional multistage architectures as it (i) Obviates the need for complex and costly input modules, by means of a few, yet simple, input FIFO queues. (ii) Avoids the need for a complex and synchronized scheduling process over a high number of input-output modules and/or port pairs. (iii) Provides speedup, load balancing and path-diversity thanks to a dynamic dispatching scheme as well as the NoC based fabric nature. Simulations show that the Clos-UDN outperforms some common multistage switches under a range of input traffic loads, making it highly appealing for ultra-high capacity DC networks.
|
2012.02029
|
Swati Padhee
|
Swati Padhee, Anurag Illendula, Megan Sadler, Valerie L.Shalin, Tanvi
Banerjee, Krishnaprasad Thirunarayan, William L. Romine
|
Predicting Early Indicators of Cognitive Decline from Verbal Utterances
|
Camera-ready paper accepted for publication at IEEE BIBM 2020
| null | null |
IEEE BIBM 2020 paper ID B686
|
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dementia is a group of irreversible, chronic, and progressive
neurodegenerative disorders resulting in impaired memory, communication, and
thought processes. In recent years, clinical research advances in brain aging
have focused on the earliest clinically detectable stage of incipient dementia,
commonly known as mild cognitive impairment (MCI). Currently, these disorders
are diagnosed using a manual analysis of neuropsychological examinations. We
measure the feasibility of using the linguistic characteristics of verbal
utterances elicited during neuropsychological exams of elderly subjects to
distinguish between elderly control groups, people with MCI, people diagnosed
with possible Alzheimer's disease (AD), and probable AD. We investigated the
performance of both theory-driven psycholinguistic features and data-driven
contextual language embeddings in identifying different clinically diagnosed
groups. Our experiments show that a combination of contextual and
psycholinguistic features extracted by a Support Vector Machine improved
distinguishing the verbal utterances of elderly controls, people with MCI,
possible AD, and probable AD. This is the first work to identify four clinical
diagnosis groups of dementia in a highly imbalanced dataset. Our work shows
that machine learning algorithms built on contextual and psycholinguistic
features can learn the linguistic biomarkers from verbal utterances and assist
clinical diagnosis of different stages and types of dementia, even with limited
data.
|
[
{
"created": "Thu, 19 Nov 2020 02:24:11 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2021 14:42:59 GMT",
"version": "v2"
}
] |
2021-02-25
|
[
[
"Padhee",
"Swati",
""
],
[
"Illendula",
"Anurag",
""
],
[
"Sadler",
"Megan",
""
],
[
"Shalin",
"Valerie L.",
""
],
[
"Banerjee",
"Tanvi",
""
],
[
"Thirunarayan",
"Krishnaprasad",
""
],
[
"Romine",
"William L.",
""
]
] |
Dementia is a group of irreversible, chronic, and progressive neurodegenerative disorders resulting in impaired memory, communication, and thought processes. In recent years, clinical research advances in brain aging have focused on the earliest clinically detectable stage of incipient dementia, commonly known as mild cognitive impairment (MCI). Currently, these disorders are diagnosed using a manual analysis of neuropsychological examinations. We measure the feasibility of using the linguistic characteristics of verbal utterances elicited during neuropsychological exams of elderly subjects to distinguish between elderly control groups, people with MCI, people diagnosed with possible Alzheimer's disease (AD), and probable AD. We investigated the performance of both theory-driven psycholinguistic features and data-driven contextual language embeddings in identifying different clinically diagnosed groups. Our experiments show that a combination of contextual and psycholinguistic features extracted by a Support Vector Machine improved distinguishing the verbal utterances of elderly controls, people with MCI, possible AD, and probable AD. This is the first work to identify four clinical diagnosis groups of dementia in a highly imbalanced dataset. Our work shows that machine learning algorithms built on contextual and psycholinguistic features can learn the linguistic biomarkers from verbal utterances and assist clinical diagnosis of different stages and types of dementia, even with limited data.
|
2407.03898
|
Shunqi Huang
|
Shunqi Huang, Lei Liu, and Brian M. Kurkoski
|
Overflow-Avoiding Memory AMP
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approximate Message Passing (AMP) type algorithms are widely used for signal
recovery in high-dimensional noisy linear systems. Recently, a principle called
Memory AMP (MAMP) was proposed. Leveraging this principle, the gradient descent
MAMP (GD-MAMP) algorithm was designed, inheriting the strengths of AMP and
OAMP/VAMP. In this paper, we first provide an overflow-avoiding GD-MAMP
(OA-GD-MAMP) to address the overflow problem that arises from some intermediate
variables exceeding the range of floating point numbers. Second, we develop a
complexity-reduced GD-MAMP (CR-GD-MAMP) to reduce the number of matrix-vector
products per iteration by 1/3 (from 3 to 2) with little to no impact on the
convergence speed.
|
[
{
"created": "Thu, 4 Jul 2024 12:44:03 GMT",
"version": "v1"
}
] |
2024-07-08
|
[
[
"Huang",
"Shunqi",
""
],
[
"Liu",
"Lei",
""
],
[
"Kurkoski",
"Brian M.",
""
]
] |
Approximate Message Passing (AMP) type algorithms are widely used for signal recovery in high-dimensional noisy linear systems. Recently, a principle called Memory AMP (MAMP) was proposed. Leveraging this principle, the gradient descent MAMP (GD-MAMP) algorithm was designed, inheriting the strengths of AMP and OAMP/VAMP. In this paper, we first provide an overflow-avoiding GD-MAMP (OA-GD-MAMP) to address the overflow problem that arises from some intermediate variables exceeding the range of floating point numbers. Second, we develop a complexity-reduced GD-MAMP (CR-GD-MAMP) to reduce the number of matrix-vector products per iteration by 1/3 (from 3 to 2) with little to no impact on the convergence speed.
|
0904.3093
|
Petteri Kaski
|
Andreas Bj\"orklund and Thore Husfeldt and Petteri Kaski and Mikko
Koivisto
|
Counting Paths and Packings in Halves
| null | null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is shown that one can count $k$-edge paths in an $n$-vertex graph and
$m$-set $k$-packings on an $n$-element universe, respectively, in time ${n
\choose k/2}$ and ${n \choose mk/2}$, up to a factor polynomial in $n$, $k$,
and $m$; in polynomial space, the bounds hold if multiplied by $3^{k/2}$ or
$5^{mk/2}$, respectively. These are implications of a more general result:
given two set families on an $n$-element universe, one can count the disjoint
pairs of sets in the Cartesian product of the two families with $\nO(n \ell)$
basic operations, where $\ell$ is the number of members in the two families and
their subsets.
|
[
{
"created": "Mon, 20 Apr 2009 19:46:39 GMT",
"version": "v1"
}
] |
2009-04-21
|
[
[
"Björklund",
"Andreas",
""
],
[
"Husfeldt",
"Thore",
""
],
[
"Kaski",
"Petteri",
""
],
[
"Koivisto",
"Mikko",
""
]
] |
It is shown that one can count $k$-edge paths in an $n$-vertex graph and $m$-set $k$-packings on an $n$-element universe, respectively, in time ${n \choose k/2}$ and ${n \choose mk/2}$, up to a factor polynomial in $n$, $k$, and $m$; in polynomial space, the bounds hold if multiplied by $3^{k/2}$ or $5^{mk/2}$, respectively. These are implications of a more general result: given two set families on an $n$-element universe, one can count the disjoint pairs of sets in the Cartesian product of the two families with $\nO(n \ell)$ basic operations, where $\ell$ is the number of members in the two families and their subsets.
|
2212.12686
|
B.Sundar Rajan
|
K. K. Krishnan Namboodiri and B. Sundar Rajan
|
Combinatorial Multi-Access Coded Caching: Improved Rate-Memory Trade-off
with Coded Placement
|
To appear in IEEE Transactions on Information Theory. Some optimality
gap results included. 19 pages, 2 tables and 5 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work considers the combinatorial multi-access coded caching problem
introduced in the recent work by Muralidhar \textit{et al.} [P. N. Muralidhar,
D. Katyal, and B. S. Rajan, ``Maddah-Ali-Niesen scheme for multi-access coded
caching,'' in \textit{IEEE Inf. Theory Workshop (ITW)}, 2021]. The problem
setting consists of a central server having a library of $N$ files and $C$
caches each with capacity $M$. Each user in the system can access a unique set
of $r<C$ caches, and there exist users corresponding to every distinct set of
$r$ caches. Therefore, the number of users in the system is $\binom{C}{r}$. For
the aforementioned combinatorial multi-access setting, we propose a coded
caching scheme with an MDS code-based coded placement. This novel placement
technique helps to achieve a better rate in the delivery phase compared to the
optimal scheme under uncoded placement when $M> N/C$. For a lower memory
regime, we present another scheme with coded placement, which outperforms the
optimal scheme under uncoded placement if the number of files is no more than
the number of users. Further, we derive an information-theoretic lower bound on
the optimal rate-memory trade-off of the combinatorial multi-access coded
caching scheme. In addition, using the derived lower bound, we show that the
first scheme is optimal in the higher memory regime, and the second scheme is
optimal if $N\leq \binom{C}{r}$. Finally, we show that the performance of the
first scheme is within a constant factor of the optimal performance, when
$r=2$.
|
[
{
"created": "Sat, 24 Dec 2022 08:36:05 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jan 2024 05:59:01 GMT",
"version": "v2"
}
] |
2024-01-18
|
[
[
"Namboodiri",
"K. K. Krishnan",
""
],
[
"Rajan",
"B. Sundar",
""
]
] |
This work considers the combinatorial multi-access coded caching problem introduced in the recent work by Muralidhar \textit{et al.} [P. N. Muralidhar, D. Katyal, and B. S. Rajan, ``Maddah-Ali-Niesen scheme for multi-access coded caching,'' in \textit{IEEE Inf. Theory Workshop (ITW)}, 2021]. The problem setting consists of a central server having a library of $N$ files and $C$ caches each with capacity $M$. Each user in the system can access a unique set of $r<C$ caches, and there exist users corresponding to every distinct set of $r$ caches. Therefore, the number of users in the system is $\binom{C}{r}$. For the aforementioned combinatorial multi-access setting, we propose a coded caching scheme with an MDS code-based coded placement. This novel placement technique helps to achieve a better rate in the delivery phase compared to the optimal scheme under uncoded placement when $M> N/C$. For a lower memory regime, we present another scheme with coded placement, which outperforms the optimal scheme under uncoded placement if the number of files is no more than the number of users. Further, we derive an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial multi-access coded caching scheme. In addition, using the derived lower bound, we show that the first scheme is optimal in the higher memory regime, and the second scheme is optimal if $N\leq \binom{C}{r}$. Finally, we show that the performance of the first scheme is within a constant factor of the optimal performance, when $r=2$.
|
2407.00524
|
Artur Janicki
|
Mateusz Brzozowski and Artur Janicki
|
Real-Time Energy Measurement for Non-Intrusive Well-Being Monitoring of
Elderly People -- a Case Study
|
6 pages, 4 figures
| null | null | null |
cs.CY cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents a case study demonstrating a non-intrusive method for
the well-being monitoring of elderly people. It is based on our real-time
energy measurement system, which uses tiny beacons attached to electricity
meters. Four participants aged 67-82 years took part in our study. We observed
their electric power consumption for approximately a month, and then analyzed
the data, taking into account the participants' notes on their activities. We
created typical daily usage profiles for each participant and used anomaly
detection to find unusual energy consumption. We found out that real-time
energy measurement can give significant insight into someone's daily activities
and, consequently, bring invaluable information to caregivers about the
well-being of an elderly person, while being discreet and entirely
non-intrusive.
|
[
{
"created": "Sat, 29 Jun 2024 20:03:50 GMT",
"version": "v1"
}
] |
2024-07-02
|
[
[
"Brzozowski",
"Mateusz",
""
],
[
"Janicki",
"Artur",
""
]
] |
This article presents a case study demonstrating a non-intrusive method for the well-being monitoring of elderly people. It is based on our real-time energy measurement system, which uses tiny beacons attached to electricity meters. Four participants aged 67-82 years took part in our study. We observed their electric power consumption for approximately a month, and then analyzed the data, taking into account the participants' notes on their activities. We created typical daily usage profiles for each participant and used anomaly detection to find unusual energy consumption. We found out that real-time energy measurement can give significant insight into someone's daily activities and, consequently, bring invaluable information to caregivers about the well-being of an elderly person, while being discreet and entirely non-intrusive.
|
1204.5652
|
Rakshith Rajashekar
|
Rakshith Rajashekar and K. V. S. Hari
|
ML Decoding Complexity Reduction in STBCs Using Time-Orthogonal Pulse
Shaping
|
10 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the recent developments in the Space Shift Keying (SSK) and
Spatial Modulation (SM) systems which employ Time-Orthogonal Pulse Shaping
(TOPS) filters to achieve transmit diversity gains, we propose TOPS for
Space-Time Block Codes (STBC). We show that any STBC whose set of weight
matrices partitions into P subsets under the equivalence relation termed the
Common Support Relation can be made P-group decodable by properly employing
TOPS waveforms across space and time. Furthermore, by considering some of the
well-known STBCs in the literature, we show that the order of their Maximum
Likelihood decoding complexity can be greatly reduced by the application of
TOPS.
|
[
{
"created": "Wed, 25 Apr 2012 13:16:06 GMT",
"version": "v1"
}
] |
2012-04-26
|
[
[
"Rajashekar",
"Rakshith",
""
],
[
"Hari",
"K. V. S.",
""
]
] |
Motivated by the recent developments in the Space Shift Keying (SSK) and Spatial Modulation (SM) systems which employ Time-Orthogonal Pulse Shaping (TOPS) filters to achieve transmit diversity gains, we propose TOPS for Space-Time Block Codes (STBC). We show that any STBC whose set of weight matrices partitions into P subsets under the equivalence relation termed the Common Support Relation can be made P-group decodable by properly employing TOPS waveforms across space and time. Furthermore, by considering some of the well-known STBCs in the literature, we show that the order of their Maximum Likelihood decoding complexity can be greatly reduced by the application of TOPS.
|
2403.17331
|
Hao Wang
|
Ashish Bastola, Hao Wang, Xiwen Chen, Abolfazl Razi
|
FedMIL: Federated-Multiple Instance Learning for Video Analysis with
Optimized DPP Scheduling
| null | null | null | null |
cs.DC cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many AI platforms, including traffic monitoring systems, use Federated
Learning (FL) for decentralized sensor data processing for learning-based
applications while preserving privacy and ensuring secured information
transfer. On the other hand, applying supervised learning to large data
samples, like high-resolution images, requires intensive human labor to label
different parts of a data sample. Multiple Instance Learning (MIL) alleviates
this challenge by operating over labels assigned to the 'bag' of instances. In
this paper, we introduce Federated Multiple-Instance Learning (FedMIL). This
framework applies federated learning to boost the training performance in
video-based MIL tasks such as vehicle accident detection using distributed CCTV
networks. However, data sources in decentralized settings are not typically
Independently and Identically Distributed (IID), making client selection
imperative to collectively represent the entire dataset with minimal clients.
To address this challenge, we propose DPPQ, a framework based on the
Determinantal Point Process (DPP) with a quality-based kernel to select clients
with the most diverse datasets that achieve better performance compared to both
random selection and current DPP-based client selection methods even with less
data utilization in the majority of non-IID cases. This offers a significant
advantage for deployment on edge devices with limited computational resources,
providing a reliable solution for training AI models in massive smart sensor
networks.
|
[
{
"created": "Tue, 26 Mar 2024 02:30:50 GMT",
"version": "v1"
}
] |
2024-03-27
|
[
[
"Bastola",
"Ashish",
""
],
[
"Wang",
"Hao",
""
],
[
"Chen",
"Xiwen",
""
],
[
"Razi",
"Abolfazl",
""
]
] |
Many AI platforms, including traffic monitoring systems, use Federated Learning (FL) for decentralized sensor data processing for learning-based applications while preserving privacy and ensuring secured information transfer. On the other hand, applying supervised learning to large data samples, like high-resolution images, requires intensive human labor to label different parts of a data sample. Multiple Instance Learning (MIL) alleviates this challenge by operating over labels assigned to the 'bag' of instances. In this paper, we introduce Federated Multiple-Instance Learning (FedMIL). This framework applies federated learning to boost the training performance in video-based MIL tasks such as vehicle accident detection using distributed CCTV networks. However, data sources in decentralized settings are not typically Independently and Identically Distributed (IID), making client selection imperative to collectively represent the entire dataset with minimal clients. To address this challenge, we propose DPPQ, a framework based on the Determinantal Point Process (DPP) with a quality-based kernel to select clients with the most diverse datasets that achieve better performance compared to both random selection and current DPP-based client selection methods even with less data utilization in the majority of non-IID cases. This offers a significant advantage for deployment on edge devices with limited computational resources, providing a reliable solution for training AI models in massive smart sensor networks.
|
1206.6225
|
Saeid Abolfazli
|
Muhammad Shiraz, Saeid Abolfazli, Zohreh Sanaei, Abdullah Gani
|
Virtual Machine Migration: A Resource Intensive Outsourcing Mechanism
for Mobile Cloud Computing
|
This article has been withdrawn by the author due to some copyright
issues
|
Muhammad Shiraz, Saeid Abolfazli, Zohreh Sanaei, Abdullah Gani,
Virtual Machine Migration: A Resource Intensive Outsourcing Mechanism for
Mobile Cloud Computing. Archives DesSciences Journal, Vol.65, No.6, 2012.
ISSN: 1661-464X
| null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Mobile Cloud Computing (MCC), Virtual Machine (VM) migration based process
offloading is a dominant approach to enhance Smart Mobile Devices (SMDs). A
challenging aspect of VM deployment is the additional computing resource usage
in the deployment and management of the VM, which requires computing resources
for VM creation and configuration. VM management comprises the use of computing
resources in monitoring the VM over its entire lifecycle and managing physical
resources for the VM on SMDs. Therefore, VM migration based application
offloading requires additional computing resources; consequently, the computing
resource demand and the execution time of the application both increase. In
this paper, we empirically review the impact of VM deployment and management on
the execution time of applications in diverse scenarios. We investigate VM
deployment and management for application processing in a simulation
environment by employing CloudSim, a simulation toolkit that provides an
extensible simulation framework to model VM deployment and management for
application processing in cloud infrastructure. The significance of this work
is to show that VM deployment and management necessitate additional computing
resources on the SMD for application offloading. We evaluate VM deployment and
management in application processing by analyzing Key Performance Parameters
(KPPs) in different scenarios, such as VM deployment, the execution time of
applications, and the total execution time of the simulation. We use KPPs to
assess deviations in the results of diverse experimental scenarios. The
empirical analysis concludes that VM deployment and management oblige
additional resources on the computing host, which makes it a heavyweight
approach for process offloading on smart mobile devices.
|
[
{
"created": "Wed, 27 Jun 2012 10:23:14 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Jul 2012 10:51:08 GMT",
"version": "v2"
},
{
"created": "Fri, 30 Nov 2012 01:18:13 GMT",
"version": "v3"
}
] |
2012-12-03
|
[
[
"Shiraz",
"Muhammad",
""
],
[
"Abolfazli",
"Saeid",
""
],
[
"Sanaei",
"Zohreh",
""
],
[
"Gani",
"Abdullah",
""
]
] |
In Mobile Cloud Computing (MCC), Virtual Machine (VM) migration based process offloading is a dominant approach to enhance Smart Mobile Devices (SMDs). A challenging aspect of VM deployment is the additional computing resource usage in the deployment and management of the VM, which requires computing resources for VM creation and configuration. VM management comprises the use of computing resources in monitoring the VM over its entire lifecycle and managing physical resources for the VM on SMDs. Therefore, VM migration based application offloading requires additional computing resources; consequently, the computing resource demand and the execution time of the application both increase. In this paper, we empirically review the impact of VM deployment and management on the execution time of applications in diverse scenarios. We investigate VM deployment and management for application processing in a simulation environment by employing CloudSim, a simulation toolkit that provides an extensible simulation framework to model VM deployment and management for application processing in cloud infrastructure. The significance of this work is to show that VM deployment and management necessitate additional computing resources on the SMD for application offloading. We evaluate VM deployment and management in application processing by analyzing Key Performance Parameters (KPPs) in different scenarios, such as VM deployment, the execution time of applications, and the total execution time of the simulation. We use KPPs to assess deviations in the results of diverse experimental scenarios. The empirical analysis concludes that VM deployment and management oblige additional resources on the computing host, which makes it a heavyweight approach for process offloading on smart mobile devices.
|
2212.07614
|
Gang Wang
|
Gang Wang
|
Power Allocation in Two-way Relaying Networks
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we study relay selection and power allocation in two-way
relaying networks consisting of a source, a destination and multiple
half-duplex decode-and-forward (DF) relays. A transmission model with three
time subslots is purposely introduced. In the first subslot, the selected
relay applies a time-switching protocol to harvest radio frequency energy
radiated by the source and destination; in the remaining subslots, the
selected relay facilitates the exchange of information between the source and
destination. Due to the finite-size data buffer and finite-size battery of
the relay, an optimal relay selection and power allocation policy is proposed
in order to maximize the network sum-throughput. One obstacle is the inherent
non-convexity of the underlying sum-throughput optimization problem. By
carefully decoupling the multiplicative variables and relaxing the binary
variable to a real number, we convert this problem into a convex one and then
use Karush-Kuhn-Tucker (KKT) conditions to solve it. Extensive simulations
have been conducted to demonstrate the improved sum-throughput of our
proposed strategy.
|
[
{
"created": "Thu, 15 Dec 2022 04:35:25 GMT",
"version": "v1"
}
] |
2022-12-16
|
[
[
"Wang",
"Gang",
""
]
] |
In this paper, we study relay selection and power allocation in two-way relaying networks consisting of a source, a destination and multiple half-duplex decode-and-forward (DF) relays. A transmission model with three time subslots is purposely introduced. In the first subslot, the selected relay applies a time-switching protocol to harvest radio frequency energy radiated by the source and destination; in the remaining subslots, the selected relay facilitates the exchange of information between the source and destination. Due to the finite-size data buffer and finite-size battery of the relay, an optimal relay selection and power allocation policy is proposed in order to maximize the network sum-throughput. One obstacle is the inherent non-convexity of the underlying sum-throughput optimization problem. By carefully decoupling the multiplicative variables and relaxing the binary variable to a real number, we convert this problem into a convex one and then use Karush-Kuhn-Tucker (KKT) conditions to solve it. Extensive simulations have been conducted to demonstrate the improved sum-throughput of our proposed strategy.
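The KKT-based allocation in the abstract above is not reproduced here, but the same machinery on a simpler, classic problem gives a feel for it: maximizing sum-rate under a single power budget, where the KKT conditions yield the water-filling form p_i = max(0, mu - 1/g_i). The specific gains and budget below are illustrative assumptions, not values from the paper.

```python
def water_filling(gains, total_power, iters=200):
    """Maximize sum(log(1 + p_i * g_i)) s.t. sum(p_i) = P, p_i >= 0.

    The KKT stationarity condition gives p_i = max(0, mu - 1/g_i) for a
    common water level mu; since the total power used is monotone in mu,
    we find mu by bisection."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, lo - 1.0 / g) for g in gains]
```

Stronger channels sit higher above the water line and receive more power; sufficiently weak channels can be shut off entirely.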
|
2104.05808
|
George Kesidis
|
Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, and George Kesidis
|
A Backdoor Attack against 3D Point Cloud Classifiers
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vulnerability of 3D point cloud (PC) classifiers has become a grave concern
due to the popularity of 3D sensors in safety-critical applications. Existing
adversarial attacks against 3D PC classifiers are all test-time evasion (TTE)
attacks that aim to induce test-time misclassifications using knowledge of the
classifier. But since the victim classifier is usually not accessible to the
attacker, the threat is largely diminished in practice, as PC TTEs typically
have poor transferability. Here, we propose the first backdoor attack (BA)
against PC classifiers. Originally proposed for images, BAs poison the victim
classifier's training set so that the classifier learns to decide in favor of
the attacker's target class whenever the attacker's backdoor pattern is present in
a given input sample. Significantly, BAs do not require knowledge of the victim
classifier. Different from image BAs, we propose to insert a cluster of points
into a PC as a robust backdoor pattern customized for 3D PCs. Such clusters are
also consistent with a physical attack (i.e., with a captured object in a
scene). We optimize the cluster's location using an independently trained
surrogate classifier and choose the cluster's local geometry to evade possible
PC preprocessing and PC anomaly detectors (ADs). Experimentally, our BA
achieves a uniformly high success rate (> 87%) and shows evasiveness against
state-of-the-art PC ADs.
|
[
{
"created": "Mon, 12 Apr 2021 20:47:48 GMT",
"version": "v1"
}
] |
2021-04-14
|
[
[
"Xiang",
"Zhen",
""
],
[
"Miller",
"David J.",
""
],
[
"Chen",
"Siheng",
""
],
[
"Li",
"Xi",
""
],
[
"Kesidis",
"George",
""
]
] |
Vulnerability of 3D point cloud (PC) classifiers has become a grave concern due to the popularity of 3D sensors in safety-critical applications. Existing adversarial attacks against 3D PC classifiers are all test-time evasion (TTE) attacks that aim to induce test-time misclassifications using knowledge of the classifier. But since the victim classifier is usually not accessible to the attacker, the threat is largely diminished in practice, as PC TTEs typically have poor transferability. Here, we propose the first backdoor attack (BA) against PC classifiers. Originally proposed for images, BAs poison the victim classifier's training set so that the classifier learns to decide in favor of the attacker's target class whenever the attacker's backdoor pattern is present in a given input sample. Significantly, BAs do not require knowledge of the victim classifier. Different from image BAs, we propose to insert a cluster of points into a PC as a robust backdoor pattern customized for 3D PCs. Such clusters are also consistent with a physical attack (i.e., with a captured object in a scene). We optimize the cluster's location using an independently trained surrogate classifier and choose the cluster's local geometry to evade possible PC preprocessing and PC anomaly detectors (ADs). Experimentally, our BA achieves a uniformly high success rate (> 87%) and shows evasiveness against state-of-the-art PC ADs.
|
1907.04713
|
Riccardo Aragona
|
Riccardo Aragona, Francesca Marzi, Filippo Mignosi, Matteo Spezialetti
|
Entropy and Compression: A simple proof of an inequality of
Khinchin-Ornstein-Shields
|
Compared to version 1, in version 2 we added a simpler proof than the
one given by Shields of a more general theorem (Theorem 4, pg. 7) presented
by Ornstein and Shields. Consequently we also modified the title of the
paper. In version 3 we have reordered the sections of the paper, simplified
the proof of Theorem 4 (now Theorem 3) and significantly reduced the proof of
Theorem 3 (now Theorem 4)
|
Problems of Information Transmission, Vo.l 56 No. 1, 2020. A
view-only published version here: https://rdcu.be/b3Cco
|
10.1134/S0032946020010020
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper concerns the folklore statement that ``entropy is a lower bound
for compression''. More precisely, we derive from the entropy theorem a
simple proof of a pointwise inequality first stated by Ornstein and Shields,
which is the almost-sure version of an average inequality first stated by
Khinchin in 1953. We further give an elementary proof of the original
Khinchin inequality, which can be used as an exercise for Information Theory
students, and we conclude by giving historical and technical notes on this
inequality.
|
[
{
"created": "Wed, 10 Jul 2019 13:34:13 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jul 2019 08:48:00 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Aug 2019 15:13:17 GMT",
"version": "v3"
},
{
"created": "Wed, 22 Apr 2020 16:28:47 GMT",
"version": "v4"
}
] |
2022-05-25
|
[
[
"Aragona",
"Riccardo",
""
],
[
"Marzi",
"Francesca",
""
],
[
"Mignosi",
"Filippo",
""
],
[
"Spezialetti",
"Matteo",
""
]
] |
This paper concerns the folklore statement that ``entropy is a lower bound for compression''. More precisely, we derive from the entropy theorem a simple proof of a pointwise inequality first stated by Ornstein and Shields, which is the almost-sure version of an average inequality first stated by Khinchin in 1953. We further give an elementary proof of the original Khinchin inequality, which can be used as an exercise for Information Theory students, and we conclude by giving historical and technical notes on this inequality.
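The paper's content is the proof, but the statement itself can be checked empirically: no lossless compressor should substantially undercut n*H bits on a typical output of a memoryless source. A quick sketch, using zlib purely as an example compressor (any general-purpose one would do; the parameters n, p, seed are arbitrary choices):

```python
import math
import random
import zlib

def entropy_bits(p):
    """Shannon entropy H(p) of a Bernoulli(p) source, in bits per symbol."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def compression_vs_entropy(n=20000, p=0.1, seed=0):
    """Compress one typical Bernoulli(p) sequence (one '0'/'1' byte per
    symbol) and return (compressed size in bits, entropy bound n*H(p))."""
    rng = random.Random(seed)
    data = "".join("1" if rng.random() < p else "0" for _ in range(n)).encode()
    return 8 * len(zlib.compress(data, 9)), n * entropy_bits(p)
```

For p = 0.1 the bound is about 0.47n bits; zlib lands above it but well below the raw size, as the inequality predicts.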
|
2308.08717
|
Jianzong Wang
|
Liang Wang, Nan Zhang, Xiaoyang Qu, Jianzong Wang, Jiguang Wan,
Guokuan Li, Kaiyu Hu, Guilin Jiang, Jing Xiao
|
EdgeMA: Model Adaptation System for Real-Time Video Analytics on Edge
Devices
|
Accepted by 30th International Conference on Neural Information
Processing (ICONIP 2023)
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time video analytics on edge devices for changing scenes remains a
difficult task. As edge devices are usually resource-constrained, edge deep
neural networks (DNNs) have fewer weights and shallower architectures than
general DNNs. As a result, they only perform well in limited scenarios and are
sensitive to data drift. In this paper, we introduce EdgeMA, a practical and
efficient video analytics system designed to adapt models to shifts in
real-world video streams over time, addressing the data drift problem. EdgeMA
extracts the gray level co-occurrence matrix based statistical texture feature
and uses the Random Forest classifier to detect the domain shift. Moreover, we
have incorporated a method of model adaptation based on importance weighting,
specifically designed to update models to cope with the label distribution
shift. Through rigorous evaluation of EdgeMA on a real-world dataset, our
results illustrate that EdgeMA significantly improves inference accuracy.
|
[
{
"created": "Thu, 17 Aug 2023 00:49:44 GMT",
"version": "v1"
}
] |
2023-08-21
|
[
[
"Wang",
"Liang",
""
],
[
"Zhang",
"Nan",
""
],
[
"Qu",
"Xiaoyang",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Wan",
"Jiguang",
""
],
[
"Li",
"Guokuan",
""
],
[
"Hu",
"Kaiyu",
""
],
[
"Jiang",
"Guilin",
""
],
[
"Xiao",
"Jing",
""
]
] |
Real-time video analytics on edge devices for changing scenes remains a difficult task. As edge devices are usually resource-constrained, edge deep neural networks (DNNs) have fewer weights and shallower architectures than general DNNs. As a result, they only perform well in limited scenarios and are sensitive to data drift. In this paper, we introduce EdgeMA, a practical and efficient video analytics system designed to adapt models to shifts in real-world video streams over time, addressing the data drift problem. EdgeMA extracts the gray level co-occurrence matrix based statistical texture feature and uses the Random Forest classifier to detect the domain shift. Moreover, we have incorporated a method of model adaptation based on importance weighting, specifically designed to update models to cope with the label distribution shift. Through rigorous evaluation of EdgeMA on a real-world dataset, our results illustrate that EdgeMA significantly improves inference accuracy.
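EdgeMA's drift detector is built on GLCM texture statistics. The sketch below computes a minimal gray level co-occurrence matrix for a single pixel offset plus one common derived statistic (contrast); the exact offsets, quantization levels, and statistics EdgeMA uses are assumptions here, not taken from the paper.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Gray level co-occurrence matrix for one pixel offset (dy, dx).
    `image` holds integer gray levels in [0, levels)."""
    img = np.asarray(image, dtype=np.int64)
    h, w = img.shape
    mat = np.zeros((levels, levels), dtype=np.int64)
    src = img[: h - dy, : w - dx]   # reference pixels
    dst = img[dy:, dx:]             # neighbours at the chosen offset
    np.add.at(mat, (src.ravel(), dst.ravel()), 1)  # count co-occurring pairs
    return mat

def contrast(mat):
    """One common GLCM texture statistic: sum_{i,j} (i - j)^2 * P(i, j)."""
    p = mat / mat.sum()
    i, j = np.indices(mat.shape)
    return float(((i - j) ** 2 * p).sum())
```

A feature vector for the drift detector could then concatenate several such statistics across offsets before feeding the Random Forest.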
|
1310.6808
|
Dr. Mohammad Shahidul Islam
|
Mohammad shahidul Islam
|
Gender Classification Using Gradient Direction Pattern
|
3 pages, 5 figures, 3 tables, SCI journal
|
Sci.Int(Lahore),25(4),797-799,2013 ISSN 1013-5316; CODEN: SINTE 8
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel methodology for gender classification is presented in this paper. It
extracts features from local regions of a face using gray-level intensity
differences. The facial area is divided into sub-regions, and the GDP
histograms extracted from those regions are concatenated into a single vector
to represent the face. The classification accuracy obtained using a support
vector machine outperforms all traditional feature descriptors for gender
classification. The method is evaluated on images collected from the FERET
database and obtains very high accuracy.
|
[
{
"created": "Fri, 25 Oct 2013 02:37:44 GMT",
"version": "v1"
}
] |
2013-10-28
|
[
[
"Islam",
"Mohammad shahidul",
""
]
] |
A novel methodology for gender classification is presented in this paper. It extracts features from local regions of a face using gray-level intensity differences. The facial area is divided into sub-regions, and the GDP histograms extracted from those regions are concatenated into a single vector to represent the face. The classification accuracy obtained using a support vector machine outperforms all traditional feature descriptors for gender classification. The method is evaluated on images collected from the FERET database and obtains very high accuracy.
|
2011.13300
|
Segev Wasserkrug
|
Segev Wasserkrug and Eitan Farchi
|
A Game Theoretic Model for Strategic Coopetition in Business Networks
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Private blockchains are driving the creation of business networks, resulting
in new value or new business models for the enterprises participating in the
network. Such business networks form when enterprises come together to derive
value through a network that is greater than the value that can be derived by
any single company alone. This results in a setting that combines both
competitive and cooperative behavior, which we call strategic coopetition.
Traditionally, cooperative and competitive behavior have been analyzed
separately in game theory. In this article, we provide a formal model that
enables the joint analysis of these different types of behavior and the
interdependencies between them. Using this model, we formally demonstrate and
analyze the incentives for both cooperative and competitive behavior.
|
[
{
"created": "Thu, 26 Nov 2020 14:06:13 GMT",
"version": "v1"
}
] |
2020-12-09
|
[
[
"Wasserkrug",
"Segev",
""
],
[
"Farchi",
"Eitan",
""
]
] |
Private blockchains are driving the creation of business networks, resulting in new value or new business models for the enterprises participating in the network. Such business networks form when enterprises come together to derive value through a network that is greater than the value that can be derived by any single company alone. This results in a setting that combines both competitive and cooperative behavior, which we call strategic coopetition. Traditionally, cooperative and competitive behavior have been analyzed separately in game theory. In this article, we provide a formal model that enables the joint analysis of these different types of behavior and the interdependencies between them. Using this model, we formally demonstrate and analyze the incentives for both cooperative and competitive behavior.
|
2112.01708
|
Ashkan Pourkand
|
Ruisi Zhang and Ashkan Pourkand
|
Emergency-braking Distance Prediction using Deep Learning
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Predicting emergency-braking distance is important for collision avoidance
related features, which are among the most essential and popular safety
features of vehicles. In this study, we first gathered a large data set
including three-dimensional acceleration data and the corresponding
emergency-braking distances. Using this data set, we propose a deep-learning
model to predict the emergency-braking distance, which only requires 0.25
seconds of three-dimensional vehicle acceleration data before the brake as
input. We consider two road surfaces; our deep-learning approach is robust to
both and achieves accuracy within 3 feet.
|
[
{
"created": "Fri, 3 Dec 2021 04:36:14 GMT",
"version": "v1"
}
] |
2021-12-06
|
[
[
"Zhang",
"Ruisi",
""
],
[
"Pourkand",
"Ashkan",
""
]
] |
Predicting emergency-braking distance is important for collision avoidance related features, which are among the most essential and popular safety features of vehicles. In this study, we first gathered a large data set including three-dimensional acceleration data and the corresponding emergency-braking distances. Using this data set, we propose a deep-learning model to predict the emergency-braking distance, which only requires 0.25 seconds of three-dimensional vehicle acceleration data before the brake as input. We consider two road surfaces; our deep-learning approach is robust to both and achieves accuracy within 3 feet.
|
2109.00805
|
Ruicong Huang
|
Ruicong Huang
|
Brief View and Analysis to Latest Android Security Issues and Approaches
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Due to the continuous improvement of its performance and functionality,
Android remains the most popular mobile operating system today. However,
various malicious applications bring great threats to the system. Over the
past few years, significant changes have occurred in both malware and
countermeasures. Specifically, malware is continuously evolving, and advanced
approaches are being adopted for more accurate detection. To keep up with the
latest situation, in this paper we conduct a wide-ranging analysis covering
the latest malware, Android security features, and countermeasures. We also
provide some findings gathered while collecting information and carrying out
experiments, which we think are useful for further research and have not been
mentioned in previous works.
|
[
{
"created": "Thu, 2 Sep 2021 09:34:11 GMT",
"version": "v1"
}
] |
2021-09-03
|
[
[
"Huang",
"Ruicong",
""
]
] |
Due to the continuous improvement of its performance and functionality, Android remains the most popular mobile operating system today. However, various malicious applications bring great threats to the system. Over the past few years, significant changes have occurred in both malware and countermeasures. Specifically, malware is continuously evolving, and advanced approaches are being adopted for more accurate detection. To keep up with the latest situation, in this paper we conduct a wide-ranging analysis covering the latest malware, Android security features, and countermeasures. We also provide some findings gathered while collecting information and carrying out experiments, which we think are useful for further research and have not been mentioned in previous works.
|
1803.10553
|
Takashi Ikegawa
|
Takashi Ikegawa
|
Effect of payload size on mean response time when message segmentations
occur using $\rm{M}^{\rm X}/\rm{G}/1$ queueing model
| null | null | null | null |
cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes the $\rm{M}^{\rm X}/\rm{G}/1$ queueing model to represent
arrivals of segmented packets when message segmentations occur. This queueing
model enables us to derive the closed form of mean response time, given payload
size, message size distribution and message arrival rate. From a numerical
result, we show that the mean response time is more convex in payload sizes if
message arrival rate is larger in a scenario where Web objects are delivered
over a physical link.
|
[
{
"created": "Wed, 28 Mar 2018 12:10:26 GMT",
"version": "v1"
}
] |
2018-03-29
|
[
[
"Ikegawa",
"Takashi",
""
]
] |
This paper proposes the $\rm{M}^{\rm X}/\rm{G}/1$ queueing model to represent arrivals of segmented packets when message segmentations occur. This queueing model enables us to derive the closed form of mean response time, given payload size, message size distribution and message arrival rate. From a numerical result, we show that the mean response time is more convex in payload sizes if message arrival rate is larger in a scenario where Web objects are delivered over a physical link.
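The paper's closed form is not reproduced in the abstract; as a sketch of what such a formula looks like, the standard M^X/G/1 decomposition gives the mean wait as the Pollaczek-Khinchine delay at the packet arrival rate plus the wait behind same-batch (same-message) packets. Treat this exact expression as an assumption of the sketch, not necessarily the paper's formula.

```python
def mean_response_time(batch_rate, ex, ex2, es, es2):
    """Mean response time of a random customer in an M^X/G/1 queue.

    batch_rate : Poisson rate of batch (message) arrivals
    ex, ex2    : E[X], E[X^2] of the batch size (packets per message)
    es, es2    : E[S], E[S^2] of the per-packet service time
    """
    lam = batch_rate * ex          # effective packet arrival rate
    rho = lam * es                 # utilization
    assert rho < 1, "queue must be stable"
    pk_wait = lam * es2 / (2 * (1 - rho))              # P-K workload delay
    batch_wait = (ex2 - ex) / (2 * ex) * es / (1 - rho)  # same-batch wait
    return pk_wait + batch_wait + es
```

With deterministic batch size 1 this reduces to the ordinary M/G/1 Pollaczek-Khinchine formula, which gives a handy sanity check.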
|
1809.06131
|
Bowen Cheng
|
Bowen Cheng, Rong Xiao, Yandong Guo, Yuxiao Hu, Jianfeng Wang, Lei
Zhang
|
Revisit Multinomial Logistic Regression in Deep Learning: Data Dependent
Model Initialization for Image Recognition
|
tech report
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study in this paper how to initialize the parameters of multinomial
logistic regression (a fully connected layer followed with softmax and cross
entropy loss), which is widely used in deep neural network (DNN) models for
classification problems. As logistic regression is widely known to have no
closed-form solution, it is usually randomly initialized, leading to several
deficiencies, especially in transfer learning where all the layers except for
the last task-specific layer are initialized using a pre-trained model. The
deficiencies include slow convergence speed, the possibility of getting stuck
in a local minimum, and the risk of over-fitting. To address those
deficiencies, we first
study the properties of logistic regression and propose a closed-form
approximate solution named regularized Gaussian classifier (RGC). Then we adopt
this approximate solution to initialize the task-specific linear layer and
demonstrate superior performance over random initialization in terms of both
accuracy and convergence speed on various tasks and datasets. For example, for
image classification, our approach can reduce the training time by 10 times and
achieve 3.2% gain in accuracy for Flickr-style classification. For object
detection, our approach can also be 10 times faster in training for the same
accuracy, or 5% better in terms of mAP for VOC 2007 with slightly longer
training.
|
[
{
"created": "Mon, 17 Sep 2018 11:23:33 GMT",
"version": "v1"
}
] |
2018-09-18
|
[
[
"Cheng",
"Bowen",
""
],
[
"Xiao",
"Rong",
""
],
[
"Guo",
"Yandong",
""
],
[
"Hu",
"Yuxiao",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Zhang",
"Lei",
""
]
] |
We study in this paper how to initialize the parameters of multinomial logistic regression (a fully connected layer followed with softmax and cross entropy loss), which is widely used in deep neural network (DNN) models for classification problems. As logistic regression is widely known to have no closed-form solution, it is usually randomly initialized, leading to several deficiencies, especially in transfer learning where all the layers except for the last task-specific layer are initialized using a pre-trained model. The deficiencies include slow convergence speed, the possibility of getting stuck in a local minimum, and the risk of over-fitting. To address those deficiencies, we first study the properties of logistic regression and propose a closed-form approximate solution named regularized Gaussian classifier (RGC). Then we adopt this approximate solution to initialize the task-specific linear layer and demonstrate superior performance over random initialization in terms of both accuracy and convergence speed on various tasks and datasets. For example, for image classification, our approach can reduce the training time by 10 times and achieve 3.2% gain in accuracy for Flickr-style classification. For object detection, our approach can also be 10 times faster in training for the same accuracy, or 5% better in terms of mAP for VOC 2007 with slightly longer training.
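The paper's exact RGC formula is not given in the abstract, so the sketch below uses a generic closed-form Gaussian classifier (shared, shrinkage-regularized covariance with LDA-style weights) as a stand-in initialization; the shrink parameter and all shapes are assumptions of this sketch.

```python
import numpy as np

def gaussian_classifier_init(feats, labels, num_classes, shrink=0.1):
    """Closed-form linear-layer init from class-conditional Gaussians.

    Assuming a shared (regularized) covariance Sigma, the class scores are
    linear: w_c = Sigma^-1 mu_c, b_c = -0.5 mu_c^T Sigma^-1 mu_c + log pi_c.
    """
    d = feats.shape[1]
    mus = np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = feats - mus[labels]
    cov = centered.T @ centered / len(feats)
    # shrinkage toward a scaled identity regularizes the covariance estimate
    cov = (1 - shrink) * cov + shrink * np.eye(d) * np.trace(cov) / d
    inv = np.linalg.inv(cov)
    W = mus @ inv
    priors = np.bincount(labels, minlength=num_classes) / len(labels)
    b = -0.5 * np.einsum("cd,cd->c", W, mus) + np.log(priors)
    return W, b
```

On well-separated synthetic clusters such an init already classifies accurately before any gradient step, which is the intuition behind using it as a starting point for fine-tuning.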
|
2111.02295
|
Andreas Emil Feldmann
|
Andreas Emil Feldmann, Anish Mukherjee, Erik Jan van Leeuwen
|
The Parameterized Complexity of the Survivable Network Design Problem
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
For the well-known Survivable Network Design Problem (SNDP) we are given an
undirected graph $G$ with edge costs, a set $R$ of terminal vertices, and an
integer demand $d_{s,t}$ for every terminal pair $s,t\in R$. The task is to
compute a subgraph $H$ of $G$ of minimum cost, such that there are at least
$d_{s,t}$ disjoint paths between $s$ and $t$ in $H$. If the paths are required
to be edge-disjoint we obtain the edge-connectivity variant (EC-SNDP), while
internally vertex-disjoint paths result in the vertex-connectivity variant
(VC-SNDP). Another important case is the element-connectivity variant
(LC-SNDP), where the paths are disjoint on edges and non-terminals.
In this work we shed light on the parameterized complexity of the above
problems. We consider several natural parameters, which include the solution
size $\ell$, the sum of demands $D$, the number of terminals $k$, and the
maximum demand $d_\max$. Using simple, elegant arguments, we prove the
following results.
- We give a complete picture of the parameterized tractability of the three
variants w.r.t. parameter $\ell$: both EC-SNDP and LC-SNDP are FPT, while
VC-SNDP is W[1]-hard.
- We identify some special cases of VC-SNDP that are FPT:
* when $d_\max\leq 3$ for parameter $\ell$,
* on locally bounded treewidth graphs (e.g., planar graphs) for parameter
$\ell$, and
* on graphs of treewidth $tw$ for parameter $tw+D$.
- The well-known Directed Steiner Tree (DST) problem can be seen as
single-source EC-SNDP with $d_\max=1$ on directed graphs, and is FPT
parameterized by $k$ [Dreyfus & Wagner 1971]. We show that in contrast, the
2-DST problem, where $d_\max=2$, is W[1]-hard, even when parameterized by
$\ell$.
|
[
{
"created": "Wed, 3 Nov 2021 15:28:29 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Oct 2022 13:17:13 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Oct 2022 16:14:43 GMT",
"version": "v3"
},
{
"created": "Tue, 8 Nov 2022 10:06:01 GMT",
"version": "v4"
}
] |
2022-11-09
|
[
[
"Feldmann",
"Andreas Emil",
""
],
[
"Mukherjee",
"Anish",
""
],
[
"van Leeuwen",
"Erik Jan",
""
]
] |
For the well-known Survivable Network Design Problem (SNDP) we are given an undirected graph $G$ with edge costs, a set $R$ of terminal vertices, and an integer demand $d_{s,t}$ for every terminal pair $s,t\in R$. The task is to compute a subgraph $H$ of $G$ of minimum cost, such that there are at least $d_{s,t}$ disjoint paths between $s$ and $t$ in $H$. If the paths are required to be edge-disjoint we obtain the edge-connectivity variant (EC-SNDP), while internally vertex-disjoint paths result in the vertex-connectivity variant (VC-SNDP). Another important case is the element-connectivity variant (LC-SNDP), where the paths are disjoint on edges and non-terminals. In this work we shed light on the parameterized complexity of the above problems. We consider several natural parameters, which include the solution size $\ell$, the sum of demands $D$, the number of terminals $k$, and the maximum demand $d_\max$. Using simple, elegant arguments, we prove the following results. - We give a complete picture of the parameterized tractability of the three variants w.r.t. parameter $\ell$: both EC-SNDP and LC-SNDP are FPT, while VC-SNDP is W[1]-hard. - We identify some special cases of VC-SNDP that are FPT: * when $d_\max\leq 3$ for parameter $\ell$, * on locally bounded treewidth graphs (e.g., planar graphs) for parameter $\ell$, and * on graphs of treewidth $tw$ for parameter $tw+D$. - The well-known Directed Steiner Tree (DST) problem can be seen as single-source EC-SNDP with $d_\max=1$ on directed graphs, and is FPT parameterized by $k$ [Dreyfus & Wagner 1971]. We show that in contrast, the 2-DST problem, where $d_\max=2$, is W[1]-hard, even when parameterized by $\ell$.
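A demand $d_{s,t}$ asks for that many disjoint $s$-$t$ paths, so checking feasibility of a candidate subgraph in the edge-connectivity variant reduces, via Menger's theorem, to a unit-capacity max-flow. A small BFS (Edmonds-Karp) sketch of that check:

```python
from collections import deque

def edge_disjoint_paths(n, edges, s, t):
    """Maximum number of pairwise edge-disjoint s-t paths in an undirected
    graph on vertices 0..n-1 (= unit-capacity max-flow, by Menger)."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] += 1
        cap[v][u] += 1
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return flow
        # unit capacities: each augmenting path carries exactly one unit
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
```

A subgraph $H$ satisfies demand $d_{s,t}$ in EC-SNDP exactly when this count is at least $d_{s,t}$.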
|
2109.10397
|
Andrey Kutuzov
|
Mario Giulianelli, Andrey Kutuzov, Lidia Pivovarova
|
Grammatical Profiling for Semantic Change Detection
|
CoNLL 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Semantics, morphology and syntax are strongly interdependent. However, the
majority of computational methods for semantic change detection use
distributional word representations which encode mostly semantics. We
investigate an alternative method, grammatical profiling, based entirely on
changes in the morphosyntactic behaviour of words. We demonstrate that it can
be used for semantic change detection and even outperforms some distributional
semantic methods. We present an in-depth qualitative and quantitative analysis
of the predictions made by our grammatical profiling system, showing that they
are plausible and interpretable.
|
[
{
"created": "Tue, 21 Sep 2021 18:38:18 GMT",
"version": "v1"
}
] |
2021-09-23
|
[
[
"Giulianelli",
"Mario",
""
],
[
"Kutuzov",
"Andrey",
""
],
[
"Pivovarova",
"Lidia",
""
]
] |
Semantics, morphology and syntax are strongly interdependent. However, the majority of computational methods for semantic change detection use distributional word representations which encode mostly semantics. We investigate an alternative method, grammatical profiling, based entirely on changes in the morphosyntactic behaviour of words. We demonstrate that it can be used for semantic change detection and even outperforms some distributional semantic methods. We present an in-depth qualitative and quantitative analysis of the predictions made by our grammatical profiling system, showing that they are plausible and interpretable.
|
2201.10485
|
Tobias Kapp\'e
|
Jana Wagemaker and Nate Foster and Tobias Kapp\'e and Dexter Kozen and
Jurriaan Rot and Alexandra Silva
|
Concurrent NetKAT: Modeling and analyzing stateful, concurrent networks
| null |
Proc. ESOP 2022, pp 575-602
|
10.1007/978-3-030-99336-8_21
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Concurrent NetKAT (CNetKAT), an extension of NetKAT with
operators for specifying and reasoning about concurrency in scenarios where
multiple packets interact through state. We provide a model of the language
based on partially-ordered multisets (pomsets), which are a well-established
mathematical structure for defining the denotational semantics of concurrent
languages. We provide a sound and complete axiomatization of this model, and we
illustrate the use of CNetKAT through examples. More generally, CNetKAT can be
understood as an algebraic framework for reasoning about programs with both
local state (in packets) and global state (in a global store).
|
[
{
"created": "Tue, 25 Jan 2022 17:27:22 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jan 2022 09:37:53 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Jul 2022 09:12:42 GMT",
"version": "v3"
}
] |
2023-02-03
|
[
[
"Wagemaker",
"Jana",
""
],
[
"Foster",
"Nate",
""
],
[
"Kappé",
"Tobias",
""
],
[
"Kozen",
"Dexter",
""
],
[
"Rot",
"Jurriaan",
""
],
[
"Silva",
"Alexandra",
""
]
] |
We introduce Concurrent NetKAT (CNetKAT), an extension of NetKAT with operators for specifying and reasoning about concurrency in scenarios where multiple packets interact through state. We provide a model of the language based on partially-ordered multisets (pomsets), which are a well-established mathematical structure for defining the denotational semantics of concurrent languages. We provide a sound and complete axiomatization of this model, and we illustrate the use of CNetKAT through examples. More generally, CNetKAT can be understood as an algebraic framework for reasoning about programs with both local state (in packets) and global state (in a global store).
|
2405.01494
|
Mat\'ias Mendieta
|
Matias Mendieta, Guangyu Sun, Chen Chen
|
Navigating Heterogeneity and Privacy in One-Shot Federated Learning with
Diffusion Models
| null | null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated learning (FL) enables multiple clients to train models collectively
while preserving data privacy. However, FL faces challenges in terms of
communication cost and data heterogeneity. One-shot federated learning has
emerged as a solution by reducing communication rounds, improving efficiency,
and providing better security against eavesdropping attacks. Nevertheless, data
heterogeneity remains a significant challenge, impacting performance. This work
explores the effectiveness of diffusion models in one-shot FL, demonstrating
their applicability in addressing data heterogeneity and improving FL
performance. Additionally, we investigate the utility of our diffusion model
approach, FedDiff, compared to other one-shot FL methods under differential
privacy (DP). Furthermore, to improve generated sample quality under DP
settings, we propose a pragmatic Fourier Magnitude Filtering (FMF) method,
enhancing the effectiveness of generated data for global model training.
|
[
{
"created": "Thu, 2 May 2024 17:26:52 GMT",
"version": "v1"
}
] |
2024-05-03
|
[
[
"Mendieta",
"Matias",
""
],
[
"Sun",
"Guangyu",
""
],
[
"Chen",
"Chen",
""
]
] |
Federated learning (FL) enables multiple clients to train models collectively while preserving data privacy. However, FL faces challenges in terms of communication cost and data heterogeneity. One-shot federated learning has emerged as a solution by reducing communication rounds, improving efficiency, and providing better security against eavesdropping attacks. Nevertheless, data heterogeneity remains a significant challenge, impacting performance. This work explores the effectiveness of diffusion models in one-shot FL, demonstrating their applicability in addressing data heterogeneity and improving FL performance. Additionally, we investigate the utility of our diffusion model approach, FedDiff, compared to other one-shot FL methods under differential privacy (DP). Furthermore, to improve generated sample quality under DP settings, we propose a pragmatic Fourier Magnitude Filtering (FMF) method, enhancing the effectiveness of generated data for global model training.
|
2407.07710
|
Huawei Wu
|
Sihem Mesnager and Huawei Wu
|
On the differential and Walsh spectra of $x^{2q+1}$ over
$\mathbb{F}_{q^2}$
| null | null | null | null |
cs.CR cs.IT math.IT math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $q$ be an odd prime power and let $\mathbb{F}_{q^2}$ be the finite field
with $q^2$ elements. In this paper, we determine the differential spectrum of
the power function $F(x)=x^{2q+1}$ over $\mathbb{F}_{q^2}$. When the
characteristic of $\mathbb{F}_{q^2}$ is $3$, we also determine the value
distribution of the Walsh spectrum of $F$, showing that it is $4$-valued, and
use the obtained result to determine the weight distribution of a $4$-weight
cyclic code.
|
[
{
"created": "Mon, 8 Jul 2024 14:01:06 GMT",
"version": "v1"
}
] |
2024-07-11
|
[
[
"Mesnager",
"Sihem",
""
],
[
"Wu",
"Huawei",
""
]
] |
Let $q$ be an odd prime power and let $\mathbb{F}_{q^2}$ be the finite field with $q^2$ elements. In this paper, we determine the differential spectrum of the power function $F(x)=x^{2q+1}$ over $\mathbb{F}_{q^2}$. When the characteristic of $\mathbb{F}_{q^2}$ is $3$, we also determine the value distribution of the Walsh spectrum of $F$, showing that it is $4$-valued, and use the obtained result to determine the weight distribution of a $4$-weight cyclic code.
|
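The differential spectrum in the Mesnager-Wu abstract can be checked by brute force for the smallest case q = 3, where F(x) = x^(2q+1) = x^7 over GF(9). The sketch below builds GF(9) as GF(3)[t]/(t^2 + 1) (a standard irreducible polynomial; this representation is our choice, not taken from the paper) and tallies, for every nonzero a and every b, the number of solutions of F(x + a) - F(x) = b:

```python
from collections import Counter

# Brute-force differential spectrum of F(x) = x^(2q+1) for q = 3,
# over GF(9) built as GF(3)[t]/(t^2 + 1); elements are pairs (a, b) = a + b*t.
q = 3

def add(u, v):
    return ((u[0] + v[0]) % q, (u[1] + v[1]) % q)

def sub(u, v):
    return ((u[0] - v[0]) % q, (u[1] - v[1]) % q)

def mul(u, v):
    a, b = u
    c, d = v
    # (a + bt)(c + dt) = (ac - bd) + (ad + bc)t, using t^2 = -1
    return ((a * c - b * d) % q, (a * d + b * c) % q)

def power(u, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, u)
    return r

field = [(a, b) for a in range(q) for b in range(q)]
F = {x: power(x, 2 * q + 1) for x in field}

# N(a, b) = #{x : F(x + a) - F(x) = b}; the spectrum tallies these counts
spectrum = Counter()
for a in field:
    if a == (0, 0):
        continue
    counts = Counter(sub(F[add(x, a)], F[x]) for x in field)
    for b in field:
        spectrum[counts[b]] += 1

print(dict(sorted(spectrum.items())))
```

Summing the tallies recovers the (q^2 - 1) * q^2 = 72 difference pairs, a quick consistency check on the computation.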
1511.04557
|
Lionel Arend
|
Lionel Arend, Jens Krause, Michel Marso, Ray Sperber
|
Four-dimensional signalling schemes - Application to satellite
communications
|
14 pages, 9 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In satellite communications both polarizations of an electromagnetic wave are
used to transmit two separate signals. These two independent signals can be
merged to form one dual-polarization, four-dimensional signal.
The present article pursues this idea and proposes different signal
constellations to be used for four-dimensional signalling in satellite links.
Analytical methods and simulations predict an increased power efficiency of
these constellations with respect to currently used transmission methods. The
cost of this advantage is evaluated considering the limited applicability in
non-linear channels.
Four-dimensional signalling also implies simultaneous reception on both
polarizations. Such a combined reception allows the precision of timing and
carrier recovery loops to be doubled. This claim is derived analytically and
illustrated by simulating an example case.
An experimental transmitter/receiver pair was implemented and used to
demonstrate a satellite transmission using a four-dimensional, bi-orthogonal
signal in the dual-polarization channel. The experimental verification confirms
the presented simulation results.
|
[
{
"created": "Sat, 14 Nov 2015 12:55:03 GMT",
"version": "v1"
}
] |
2015-11-17
|
[
[
"Arend",
"Lionel",
""
],
[
"Krause",
"Jens",
""
],
[
"Marso",
"Michel",
""
],
[
"Sperber",
"Ray",
""
]
] |
In satellite communications both polarizations of an electromagnetic wave are used to transmit two separate signals. These two independent signals can be merged to form one dual-polarization, four-dimensional signal. The present article pursues this idea and proposes different signal constellations to be used for four-dimensional signalling in satellite links. Analytical methods and simulations predict an increased power efficiency of these constellations with respect to currently used transmission methods. The cost of this advantage is evaluated considering the limited applicability in non-linear channels. Four-dimensional signalling also implies simultaneous reception on both polarizations. Such a combined reception allows the precision of timing and carrier recovery loops to be doubled. This claim is derived analytically and illustrated by simulating an example case. An experimental transmitter/receiver pair was implemented and used to demonstrate a satellite transmission using a four-dimensional, bi-orthogonal signal in the dual-polarization channel. The experimental verification confirms the presented simulation results.
|
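The "four-dimensional, bi-orthogonal signal" demonstrated in the experiment corresponds to the classical 4D bi-orthogonal constellation: the eight points on the coordinate axes. A small sketch of its geometry (unit symbol energy is our normalization, not necessarily the authors'):

```python
from itertools import combinations

# 4D bi-orthogonal constellation: the eight points +/- sqrt(E) * e_i
# (unit symbol energy E = 1 here), one per coordinate axis and sign.
points = []
for i in range(4):
    for s in (1.0, -1.0):
        p = [0.0] * 4
        p[i] = s
        points.append(tuple(p))

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

dmin2 = min(dist2(u, v) for u, v in combinations(points, 2))
print(len(points), dmin2)   # 8 points, squared minimum distance 2.0
```

Eight points carry 3 bits per 4D symbol at squared minimum distance 2E, which is the geometric basis for the power-efficiency comparison against independent per-polarization signalling.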
2401.09716
|
Guanglin Zhou
|
Guanglin Zhou and Zhongyi Han and Shiming Chen and Biwei Huang and
Liming Zhu and Tongliang Liu and Lina Yao and Kun Zhang
|
HCVP: Leveraging Hierarchical Contrastive Visual Prompt for Domain
Generalization
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Domain Generalization (DG) endeavors to create machine learning models that
excel in unseen scenarios by learning invariant features. In DG, the prevalent
practice of constraining models to a fixed structure or uniform
parameterization to encapsulate invariant features can inadvertently blend
specific aspects. Such an approach struggles with nuanced differentiation of
inter-domain variations and may exhibit bias towards certain domains, hindering
the precise learning of domain-invariant features. Recognizing this, we
introduce a novel method designed to supplement the model with domain-level and
task-specific characteristics. This approach aims to guide the model in more
effectively separating invariant features from specific characteristics,
thereby boosting the generalization. Building on the emerging trend of visual
prompts in the DG paradigm, our work introduces the novel \textbf{H}ierarchical
\textbf{C}ontrastive \textbf{V}isual \textbf{P}rompt (HCVP) methodology. This
represents a significant advancement in the field, setting itself apart with a
unique generative approach to prompts, alongside an explicit model structure
and specialized loss functions. Differing from traditional visual prompts that
are often shared across entire datasets, HCVP utilizes a hierarchical prompt
generation network enhanced by prompt contrastive learning. These generative
prompts are instance-dependent, catering to the unique characteristics inherent
to different domains and tasks. Additionally, we devise a prompt modulation
network that serves as a bridge, effectively incorporating the generated visual
prompts into the vision transformer backbone. Experiments conducted on five DG
datasets demonstrate the effectiveness of HCVP, outperforming both established
DG algorithms and adaptation protocols.
|
[
{
"created": "Thu, 18 Jan 2024 04:23:21 GMT",
"version": "v1"
}
] |
2024-01-19
|
[
[
"Zhou",
"Guanglin",
""
],
[
"Han",
"Zhongyi",
""
],
[
"Chen",
"Shiming",
""
],
[
"Huang",
"Biwei",
""
],
[
"Zhu",
"Liming",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Yao",
"Lina",
""
],
[
"Zhang",
"Kun",
""
]
] |
Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features. In DG, the prevalent practice of constraining models to a fixed structure or uniform parameterization to encapsulate invariant features can inadvertently blend specific aspects. Such an approach struggles with nuanced differentiation of inter-domain variations and may exhibit bias towards certain domains, hindering the precise learning of domain-invariant features. Recognizing this, we introduce a novel method designed to supplement the model with domain-level and task-specific characteristics. This approach aims to guide the model in more effectively separating invariant features from specific characteristics, thereby boosting the generalization. Building on the emerging trend of visual prompts in the DG paradigm, our work introduces the novel \textbf{H}ierarchical \textbf{C}ontrastive \textbf{V}isual \textbf{P}rompt (HCVP) methodology. This represents a significant advancement in the field, setting itself apart with a unique generative approach to prompts, alongside an explicit model structure and specialized loss functions. Differing from traditional visual prompts that are often shared across entire datasets, HCVP utilizes a hierarchical prompt generation network enhanced by prompt contrastive learning. These generative prompts are instance-dependent, catering to the unique characteristics inherent to different domains and tasks. Additionally, we devise a prompt modulation network that serves as a bridge, effectively incorporating the generated visual prompts into the vision transformer backbone. Experiments conducted on five DG datasets demonstrate the effectiveness of HCVP, outperforming both established DG algorithms and adaptation protocols.
|
2308.12751
|
Paul Starke
|
Paul Starke, Sebastian Starke, Taku Komura, Frank Steinicke
|
Motion In-Betweening with Phase Manifolds
|
17 pages, 11 figures, conference
|
ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA),
August 4-6, 2023, Los Angeles, CA, USA
|
10.1145/3606921
| null |
cs.GR cs.AI cs.LG cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces a novel data-driven motion in-betweening system to
reach target poses of characters by making use of phase variables learned by a
Periodic Autoencoder. Our approach utilizes a mixture-of-experts neural network
model, in which the phases cluster movements in both space and time with
different expert weights. Each generated set of weights then produces a
sequence of poses in an autoregressive manner between the current and target
state of the character. In addition, to satisfy poses which are manually
modified by the animators or where certain end effectors serve as constraints
to be reached by the animation, a learned bi-directional control scheme is
implemented to satisfy such constraints. The results demonstrate that using
phases for motion in-betweening tasks sharpens the interpolated movements, and
furthermore stabilizes the learning process. Moreover, using phases for motion
in-betweening tasks can also synthesize more challenging movements beyond
locomotion behaviors. Additionally, style control is enabled between given
target keyframes. Our proposed framework can compete with popular
state-of-the-art methods for motion in-betweening in terms of motion quality
and generalization, especially in the presence of long transition durations.
Our framework contributes to faster prototyping workflows for creating animated
character sequences, which is of enormous interest for the game and film
industry.
|
[
{
"created": "Thu, 24 Aug 2023 12:56:39 GMT",
"version": "v1"
}
] |
2023-08-25
|
[
[
"Starke",
"Paul",
""
],
[
"Starke",
"Sebastian",
""
],
[
"Komura",
"Taku",
""
],
[
"Steinicke",
"Frank",
""
]
] |
This paper introduces a novel data-driven motion in-betweening system to reach target poses of characters by making use of phase variables learned by a Periodic Autoencoder. Our approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights. Each generated set of weights then produces a sequence of poses in an autoregressive manner between the current and target state of the character. In addition, to satisfy poses which are manually modified by the animators or where certain end effectors serve as constraints to be reached by the animation, a learned bi-directional control scheme is implemented to satisfy such constraints. The results demonstrate that using phases for motion in-betweening tasks sharpens the interpolated movements, and furthermore stabilizes the learning process. Moreover, using phases for motion in-betweening tasks can also synthesize more challenging movements beyond locomotion behaviors. Additionally, style control is enabled between given target keyframes. Our proposed framework can compete with popular state-of-the-art methods for motion in-betweening in terms of motion quality and generalization, especially in the presence of long transition durations. Our framework contributes to faster prototyping workflows for creating animated character sequences, which is of enormous interest for the game and film industry.
|
1809.04772
|
EPTCS
|
Ant\'onio Ravara (NOVA LINCS and Dep of Informatics, FCT, NOVA
University of Lisbon)
|
A Simple Functional Presentation and an Inductive Correctness Proof of
the Horn Algorithm
|
In Proceedings HCVS 2018, arXiv:1809.04554
|
EPTCS 278, 2018, pp. 34-48
|
10.4204/EPTCS.278.6
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a recursive formulation of the Horn algorithm for deciding the
satisfiability of propositional clauses. The usual presentations in imperative
pseudo-code are informal and not suitable for simple proofs of its main
properties. By defining the algorithm as a recursive function (computing a
least fixed-point), we achieve: 1) a concise, yet rigorous, formalisation; 2) a
clear form of visualising executions of the algorithm, step-by-step; 3) precise
results, simple to state and with clean inductive proofs.
|
[
{
"created": "Thu, 13 Sep 2018 04:49:05 GMT",
"version": "v1"
}
] |
2018-09-14
|
[
[
"Ravara",
"António",
"",
"NOVA LINCS and Dep of Informatics, FCT, NOVA\n University of Lisbon"
]
] |
We present a recursive formulation of the Horn algorithm for deciding the satisfiability of propositional clauses. The usual presentations in imperative pseudo-code are informal and not suitable for simple proofs of its main properties. By defining the algorithm as a recursive function (computing a least fixed-point), we achieve: 1) a concise, yet rigorous, formalisation; 2) a clear form of visualising executions of the algorithm, step-by-step; 3) precise results, simple to state and with clean inductive proofs.
|
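The least fixed point at the heart of the Horn algorithm is easy to render in a functional style. The sketch below is our Python rendering (iterative, whereas the paper gives a literally recursive definition): each Horn clause is a pair (body, head), with head = None for an all-negative goal clause, and forced-true variables are propagated to the least fixed point:

```python
def horn_sat(clauses):
    """Horn-SAT via the least fixed point of unit propagation.

    Each clause is a pair (body, head): the implication
    body_1 & ... & body_k -> head. A fact has an empty body; a goal
    (all-negative) clause has head = None.
    """
    forced = set()
    changed = True
    while changed:                          # iterate to the least fixed point
        changed = False
        for body, head in clauses:
            if head is not None and head not in forced and body <= forced:
                forced.add(head)
                changed = True
    # unsatisfiable iff some goal clause has its whole body forced true
    sat = all(head is not None or not body <= forced for body, head in clauses)
    return sat, forced

# p.  q :- p.  :- q.   -- unsatisfiable: q is forced, violating the goal
print(horn_sat([(set(), "p"), ({"p"}, "q"), ({"q"}, None)]))
```

The clause set is satisfiable exactly when no goal clause has its entire body in the computed least fixed point.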
2102.13062
|
Ryan Killick
|
J. Czyzowicz and S. Dobrev and R. Killick and E. Kranakis and D.
Krizanc and L. Narayanan and J. Opatrny and D. Pankratov and S. Shende
|
Graph Exploration by Energy-Sharing Mobile Agents
|
21 pages, 4 figures, full version of the paper appearing in the
proceedings of SIROCCO 2021
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We consider the problem of collective exploration of a known $n$-node
edge-weighted graph by $k$ mobile agents that have limited energy but are
capable of energy transfers. The agents are initially placed at an arbitrary
subset of nodes in the graph, and each agent has an initial, possibly
different, amount of energy. The goal of the exploration problem is for every
edge in the graph to be traversed by at least one agent. The amount of energy
used by an agent to travel distance $x$ is proportional to $x$. In our model,
the agents can {\em share} energy when co-located: when two agents meet, one
can transfer part of its energy to the other.
For an $n$-node path, we give an $O(n+k)$ time algorithm that either finds an
exploration strategy, or reports that one does not exist. For an $n$-node tree
with $\ell $ leaves, we give an $O(n+ \ell k^2)$ algorithm that finds an
exploration strategy if one exists. Finally, for the general graph case, we
show that the problem of deciding if exploration is possible by energy-sharing
agents is NP-hard, even for 3-regular graphs. In addition, we show that it is
always possible to find an exploration strategy if the total energy of the
agents is at least twice the total weight of the edges; moreover, this is
asymptotically optimal.
|
[
{
"created": "Thu, 25 Feb 2021 18:15:00 GMT",
"version": "v1"
}
] |
2021-02-26
|
[
[
"Czyzowicz",
"J.",
""
],
[
"Dobrev",
"S.",
""
],
[
"Killick",
"R.",
""
],
[
"Kranakis",
"E.",
""
],
[
"Krizanc",
"D.",
""
],
[
"Narayanan",
"L.",
""
],
[
"Opatrny",
"J.",
""
],
[
"Pankratov",
"D.",
""
],
[
"Shende",
"S.",
""
]
] |
We consider the problem of collective exploration of a known $n$-node edge-weighted graph by $k$ mobile agents that have limited energy but are capable of energy transfers. The agents are initially placed at an arbitrary subset of nodes in the graph, and each agent has an initial, possibly different, amount of energy. The goal of the exploration problem is for every edge in the graph to be traversed by at least one agent. The amount of energy used by an agent to travel distance $x$ is proportional to $x$. In our model, the agents can {\em share} energy when co-located: when two agents meet, one can transfer part of its energy to the other. For an $n$-node path, we give an $O(n+k)$ time algorithm that either finds an exploration strategy, or reports that one does not exist. For an $n$-node tree with $\ell $ leaves, we give an $O(n+ \ell k^2)$ algorithm that finds an exploration strategy if one exists. Finally, for the general graph case, we show that the problem of deciding if exploration is possible by energy-sharing agents is NP-hard, even for 3-regular graphs. In addition, we show that it is always possible to find an exploration strategy if the total energy of the agents is at least twice the total weight of the edges; moreover, this is asymptotically optimal.
|
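The closing bound, that total energy twice the total edge weight always suffices, has a classical Eulerian intuition: doubling every edge makes all degrees even, so a single agent can cover every edge along a closed walk of weight exactly 2W. A sketch of that doubling tour (our illustration via Hierholzer's algorithm, not the paper's construction):

```python
def doubling_tour(edges, start):
    """Closed walk covering every edge of a connected, edge-weighted
    graph: double each edge (all degrees become even), then extract an
    Euler circuit with Hierholzer's algorithm. The walk's weight is
    exactly twice the total edge weight -- the single-agent view of the
    abstract's energy bound."""
    adj, wmap = {}, {}
    for u, v, w in edges:
        wmap[(u, v)] = wmap[(v, u)] = w
        for _ in range(2):          # two parallel copies of every edge
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
    stack, tour = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v, w = adj[u].pop()
            adj[v].remove((u, w))   # consume the same copy at the far end
            stack.append(v)
        else:
            tour.append(stack.pop())
    cost = sum(wmap[(a, b)] for a, b in zip(tour, tour[1:]))
    return tour, cost

edges = [("a", "b", 3), ("b", "c", 2), ("a", "c", 4), ("c", "d", 1)]
tour, cost = doubling_tour(edges, "a")
print(cost)   # 20 = 2 * (3 + 2 + 4 + 1)
```

This is only the single-agent picture; distributing such a walk among agents that transfer energy when co-located is the subject of the paper.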
2307.10004
|
Xidong Wang
|
Ye Ouyang, Yaqin Zhang, Peng Wang, Yunxin Liu, Wen Qiao, Jun Zhu, Yang
Liu, Feng Zhang, Shuling Wang, Xidong Wang
|
6G Network Business Support System
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
6G is the next-generation intelligent and integrated digital information
infrastructure, characterized by ubiquitous interconnection, native
intelligence, multi-dimensional perception, global coverage, green and
low-carbon, native network security, etc. 6G will realize the transition from
serving people and people-things communication to supporting the efficient
connection of intelligent agents, and comprehensively leading the digital,
intelligent and green transformation of the economy and the society. As the
core support system for mobile communication networks, the 6G BSS needs to
integrate with new business models brought about by the development of the
next-generation Internet and IT, and upgrade from "network-centric" to
"business- and service-centric" and "customer-centric". 6G OSS and BSS systems need to
strengthen their integration to improve the operational efficiency and benefits
of customers by connecting the digital intelligence support capabilities on
both sides of supply and demand. This paper provides a detailed introduction to
the overall vision, potential key technologies, and functional architecture of
6G BSS systems. It also presents an evolutionary roadmap and technological
prospects for the BSS systems from 5G to 6G.
|
[
{
"created": "Wed, 19 Jul 2023 14:38:30 GMT",
"version": "v1"
}
] |
2023-07-20
|
[
[
"Ouyang",
"Ye",
""
],
[
"Zhang",
"Yaqin",
""
],
[
"Wang",
"Peng",
""
],
[
"Liu",
"Yunxin",
""
],
[
"Qiao",
"Wen",
""
],
[
"Zhu",
"Jun",
""
],
[
"Liu",
"Yang",
""
],
[
"Zhang",
"Feng",
""
],
[
"Wang",
"Shuling",
""
],
[
"Wang",
"Xidong",
""
]
] |
6G is the next-generation intelligent and integrated digital information infrastructure, characterized by ubiquitous interconnection, native intelligence, multi-dimensional perception, global coverage, green and low-carbon, native network security, etc. 6G will realize the transition from serving people and people-things communication to supporting the efficient connection of intelligent agents, and comprehensively leading the digital, intelligent and green transformation of the economy and the society. As the core support system for mobile communication networks, the 6G BSS needs to integrate with new business models brought about by the development of the next-generation Internet and IT, and upgrade from "network-centric" to "business- and service-centric" and "customer-centric". 6G OSS and BSS systems need to strengthen their integration to improve the operational efficiency and benefits of customers by connecting the digital intelligence support capabilities on both sides of supply and demand. This paper provides a detailed introduction to the overall vision, potential key technologies, and functional architecture of 6G BSS systems. It also presents an evolutionary roadmap and technological prospects for the BSS systems from 5G to 6G.
|