id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.11191
|
Matthieu Paul
|
Matthieu Paul, Martin Danelljan, Christoph Mayer, and Luc Van Gool
|
Robust Visual Tracking by Segmentation
|
Accepted at ECCV 2022. Code and trained models are available at:
https://github.com/visionml/pytracking
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating the target extent poses a fundamental challenge in visual object
tracking. Typically, trackers are box-centric and fully rely on a bounding box
to define the target in the scene. In practice, objects often have complex
shapes and are not aligned with the image axis. In these cases, bounding boxes
do not provide an accurate description of the target and often contain a
majority of background pixels. We propose a segmentation-centric tracking
pipeline that not only produces a highly accurate segmentation mask, but also
internally works with segmentation masks instead of bounding boxes. Thus, our
tracker is able to better learn a target representation that clearly
differentiates the target in the scene from background content. In order to
achieve the necessary robustness for the challenging tracking scenario, we
propose a separate instance localization component that is used to condition
the segmentation decoder when producing the output mask. We infer a bounding
box from the segmentation mask, validate our tracker on challenging tracking
datasets and achieve the new state of the art on LaSOT with a success AUC score
of 69.7%. Since most tracking datasets do not contain mask annotations, we
cannot use them to evaluate predicted segmentation masks. Instead, we validate
our segmentation quality on two popular video object segmentation datasets.
|
[
{
"created": "Mon, 21 Mar 2022 17:59:19 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jul 2022 15:59:52 GMT",
"version": "v2"
}
] |
2022-07-21
|
[
[
"Paul",
"Matthieu",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Mayer",
"Christoph",
""
],
[
"Van Gool",
"Luc",
""
]
] |
Estimating the target extent poses a fundamental challenge in visual object tracking. Typically, trackers are box-centric and fully rely on a bounding box to define the target in the scene. In practice, objects often have complex shapes and are not aligned with the image axis. In these cases, bounding boxes do not provide an accurate description of the target and often contain a majority of background pixels. We propose a segmentation-centric tracking pipeline that not only produces a highly accurate segmentation mask, but also internally works with segmentation masks instead of bounding boxes. Thus, our tracker is able to better learn a target representation that clearly differentiates the target in the scene from background content. In order to achieve the necessary robustness for the challenging tracking scenario, we propose a separate instance localization component that is used to condition the segmentation decoder when producing the output mask. We infer a bounding box from the segmentation mask, validate our tracker on challenging tracking datasets and achieve the new state of the art on LaSOT with a success AUC score of 69.7%. Since most tracking datasets do not contain mask annotations, we cannot use them to evaluate predicted segmentation masks. Instead, we validate our segmentation quality on two popular video object segmentation datasets.
|
1511.03415
|
Bernd Flemisch
|
Oliver Sander, Timo Koch, Natalie Schr\"oder, Bernd Flemisch
|
The Dune FoamGrid implementation for surface and network grids
| null |
Archive of Numerical Software Vol 5 No 1 2017
|
10.11588/ans.2017.1.28490
| null |
cs.MS cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FoamGrid, a new implementation of the DUNE grid interface.
FoamGrid implements one- and two-dimensional grids in a physical space of
arbitrary dimension, which allows for grids for curved domains. Even more, the
grids are not expected to have a manifold structure, i.e., more than two
elements can share a common facet. This makes FoamGrid the grid data structure
of choice for simulating structures such as foams, discrete fracture networks,
or network flow problems. FoamGrid implements adaptive non-conforming
refinement with element parametrizations. As an additional feature it allows
removal and addition of elements in an existing grid, which makes FoamGrid
suitable for network growth problems. We show how to use FoamGrid, with
particular attention to the extensions of the grid interface needed to handle
non-manifold topology and grid growth. Three numerical examples demonstrate the
possibilities offered by FoamGrid.
|
[
{
"created": "Wed, 11 Nov 2015 08:23:46 GMT",
"version": "v1"
}
] |
2020-05-01
|
[
[
"Sander",
"Oliver",
""
],
[
"Koch",
"Timo",
""
],
[
"Schröder",
"Natalie",
""
],
[
"Flemisch",
"Bernd",
""
]
] |
We present FoamGrid, a new implementation of the DUNE grid interface. FoamGrid implements one- and two-dimensional grids in a physical space of arbitrary dimension, which allows for grids for curved domains. Even more, the grids are not expected to have a manifold structure, i.e., more than two elements can share a common facet. This makes FoamGrid the grid data structure of choice for simulating structures such as foams, discrete fracture networks, or network flow problems. FoamGrid implements adaptive non-conforming refinement with element parametrizations. As an additional feature it allows removal and addition of elements in an existing grid, which makes FoamGrid suitable for network growth problems. We show how to use FoamGrid, with particular attention to the extensions of the grid interface needed to handle non-manifold topology and grid growth. Three numerical examples demonstrate the possibilities offered by FoamGrid.
|
2404.09765
|
Ashish Devadas Nair
|
Ashish Devadas Nair, Julien Kindle, Plamen Levchev, Davide Scaramuzza
|
Hilti SLAM Challenge 2023: Benchmarking Single + Multi-session SLAM
across Sensor Constellations in Construction
| null | null |
10.1109/LRA.2024.3421791
| null |
cs.RO eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Simultaneous Localization and Mapping systems are a key enabler for
positioning in both handheld and robotic applications. The Hilti SLAM
Challenges organized over the past years have been successful at benchmarking
some of the world's best SLAM Systems with high accuracy. However, more
capabilities of these systems are yet to be explored, such as platform
agnosticism across varying sensor suites and multi-session SLAM. These factors
indirectly serve as an indicator of robustness and ease of deployment in
real-world applications. There exists no dataset plus benchmark combination
publicly available, which considers these factors combined. The Hilti SLAM
Challenge 2023 Dataset and Benchmark addresses this issue. Additionally, we
propose a novel fiducial marker design for a pre-surveyed point on the ground
to be observable from an off-the-shelf LiDAR mounted on a robot, and an
algorithm to estimate its position at mm-level accuracy. Results from the
challenge show an increase in overall participation, single-session SLAM
systems getting increasingly accurate, successfully operating across varying
sensor suites, but relatively few participants performing multi-session SLAM.
Dataset URL: https://www.hilti-challenge.com/dataset-2023.html
|
[
{
"created": "Mon, 15 Apr 2024 13:07:40 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jul 2024 15:09:22 GMT",
"version": "v2"
}
] |
2024-07-31
|
[
[
"Nair",
"Ashish Devadas",
""
],
[
"Kindle",
"Julien",
""
],
[
"Levchev",
"Plamen",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
Simultaneous Localization and Mapping systems are a key enabler for positioning in both handheld and robotic applications. The Hilti SLAM Challenges organized over the past years have been successful at benchmarking some of the world's best SLAM Systems with high accuracy. However, more capabilities of these systems are yet to be explored, such as platform agnosticism across varying sensor suites and multi-session SLAM. These factors indirectly serve as an indicator of robustness and ease of deployment in real-world applications. There exists no dataset plus benchmark combination publicly available, which considers these factors combined. The Hilti SLAM Challenge 2023 Dataset and Benchmark addresses this issue. Additionally, we propose a novel fiducial marker design for a pre-surveyed point on the ground to be observable from an off-the-shelf LiDAR mounted on a robot, and an algorithm to estimate its position at mm-level accuracy. Results from the challenge show an increase in overall participation, single-session SLAM systems getting increasingly accurate, successfully operating across varying sensor suites, but relatively few participants performing multi-session SLAM. Dataset URL: https://www.hilti-challenge.com/dataset-2023.html
|
cs/0510075
|
Mustafa Cenk Gursoy
|
Mustafa Cenk Gursoy, Sergio Verdu, H. Vincent Poor
|
On-Off Frequency-Shift-Keying for Wideband Fading Channels
|
To appear in the EURASIP Journal on Wireless Communications and
Networking
| null | null | null |
cs.IT math.IT
| null |
M-ary On-Off Frequency-Shift-Keying (OOFSK) is a digital modulation format in
which M-ary FSK signaling is overlaid on On/Off keying. This paper investigates
the potential of this modulation format in the context of wideband fading
channels. First it is assumed that the receiver uses energy detection for the
reception of OOFSK signals. Capacity expressions are obtained for the cases in
which the receiver has perfect and imperfect fading side information. Power
efficiency is investigated when the transmitter is subject to a peak-to-average
power ratio (PAR) limitation or a peak power limitation. It is shown that under
a PAR limitation, it is extremely power inefficient to operate in the very low
SNR regime. On the other hand, if there is only a peak power limitation, it is
demonstrated that power efficiency improves as one operates with smaller SNR
and vanishing duty factor. Also studied are the capacity improvements that
accrue when the receiver can track phase shifts in the channel or if the
received signal has a specular component. To take advantage of those features,
the phase of the modulation is also allowed to carry information.
|
[
{
"created": "Mon, 24 Oct 2005 19:50:59 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Gursoy",
"Mustafa Cenk",
""
],
[
"Verdu",
"Sergio",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
M-ary On-Off Frequency-Shift-Keying (OOFSK) is a digital modulation format in which M-ary FSK signaling is overlaid on On/Off keying. This paper investigates the potential of this modulation format in the context of wideband fading channels. First it is assumed that the receiver uses energy detection for the reception of OOFSK signals. Capacity expressions are obtained for the cases in which the receiver has perfect and imperfect fading side information. Power efficiency is investigated when the transmitter is subject to a peak-to-average power ratio (PAR) limitation or a peak power limitation. It is shown that under a PAR limitation, it is extremely power inefficient to operate in the very low SNR regime. On the other hand, if there is only a peak power limitation, it is demonstrated that power efficiency improves as one operates with smaller SNR and vanishing duty factor. Also studied are the capacity improvements that accrue when the receiver can track phase shifts in the channel or if the received signal has a specular component. To take advantage of those features, the phase of the modulation is also allowed to carry information.
|
2007.15109
|
Pasquale Antonante
|
Pasquale Antonante, Vasileios Tzoumas, Heng Yang, Luca Carlone
|
Outlier-Robust Estimation: Hardness, Minimally Tuned Algorithms, and
Applications
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nonlinear estimation in robotics and vision is typically plagued with
outliers due to wrong data association, or to incorrect detections from signal
processing and machine learning methods. This paper introduces two unifying
formulations for outlier-robust estimation, Generalized Maximum Consensus
(G-MC) and Generalized Truncated Least Squares (G-TLS), and investigates
fundamental limits, practical algorithms, and applications. Our first
contribution is a proof that outlier-robust estimation is inapproximable: in
the worst case, it is impossible to (even approximately) find the set of
outliers, even with slower-than-polynomial-time algorithms (particularly,
algorithms running in quasi-polynomial time). As a second contribution, we
review and extend two general-purpose algorithms. The first, Adaptive Trimming
(ADAPT), is combinatorial, and is suitable for G-MC; the second, Graduated
Non-Convexity (GNC), is based on homotopy methods, and is suitable for G-TLS.
We extend ADAPT and GNC to the case where the user does not have prior
knowledge of the inlier-noise statistics (or the statistics may vary over time)
and is unable to guess a reasonable threshold to separate inliers from outliers
(as the one commonly used in RANSAC). We propose the first minimally tuned
algorithms for outlier rejection, that dynamically decide how to separate
inliers from outliers. Our third contribution is an evaluation of the proposed
algorithms on robot perception problems: mesh registration, image-based object
detection (shape alignment), and pose graph optimization. ADAPT and GNC execute
in real-time, are deterministic, outperform RANSAC, and are robust up to 80-90%
outliers. Their minimally tuned versions also compare favorably with the state
of the art, even though they do not rely on a noise bound for the inliers.
|
[
{
"created": "Wed, 29 Jul 2020 21:06:13 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jan 2021 20:57:54 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Jul 2021 16:19:31 GMT",
"version": "v3"
}
] |
2021-07-05
|
[
[
"Antonante",
"Pasquale",
""
],
[
"Tzoumas",
"Vasileios",
""
],
[
"Yang",
"Heng",
""
],
[
"Carlone",
"Luca",
""
]
] |
Nonlinear estimation in robotics and vision is typically plagued with outliers due to wrong data association, or to incorrect detections from signal processing and machine learning methods. This paper introduces two unifying formulations for outlier-robust estimation, Generalized Maximum Consensus (G-MC) and Generalized Truncated Least Squares (G-TLS), and investigates fundamental limits, practical algorithms, and applications. Our first contribution is a proof that outlier-robust estimation is inapproximable: in the worst case, it is impossible to (even approximately) find the set of outliers, even with slower-than-polynomial-time algorithms (particularly, algorithms running in quasi-polynomial time). As a second contribution, we review and extend two general-purpose algorithms. The first, Adaptive Trimming (ADAPT), is combinatorial, and is suitable for G-MC; the second, Graduated Non-Convexity (GNC), is based on homotopy methods, and is suitable for G-TLS. We extend ADAPT and GNC to the case where the user does not have prior knowledge of the inlier-noise statistics (or the statistics may vary over time) and is unable to guess a reasonable threshold to separate inliers from outliers (as the one commonly used in RANSAC). We propose the first minimally tuned algorithms for outlier rejection, that dynamically decide how to separate inliers from outliers. Our third contribution is an evaluation of the proposed algorithms on robot perception problems: mesh registration, image-based object detection (shape alignment), and pose graph optimization. ADAPT and GNC execute in real-time, are deterministic, outperform RANSAC, and are robust up to 80-90% outliers. Their minimally tuned versions also compare favorably with the state of the art, even though they do not rely on a noise bound for the inliers.
|
2308.02562
|
Prateek Mittal
|
Prateek Mittal, Puneet Goyal, Joohi Chauhan
|
Food Classification using Joint Representation of Visual and Textual
Data
|
Updated results and discussions to be posted and some sections needed
to be expanded
| null | null | null |
cs.CV cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Food classification is an important task in health care. In this work, we
propose a multimodal classification framework that uses the modified version of
EfficientNet with the Mish activation function for image classification, and
the traditional BERT transformer-based network is used for text classification.
The proposed network and the other state-of-the-art methods are evaluated on a
large open-source dataset, UPMC Food-101. The experimental results show that
the proposed network outperforms the other methods, a significant difference of
11.57% and 6.34% in accuracy is observed for image and text classification,
respectively, when compared with the second-best performing method. We also
compared the performance in terms of accuracy, precision, and recall for text
classification using both machine learning and deep learning-based models. The
comparative analysis from the prediction results of both images and text
demonstrated the efficiency and robustness of the proposed approach.
|
[
{
"created": "Thu, 3 Aug 2023 04:03:46 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Aug 2023 11:47:05 GMT",
"version": "v2"
}
] |
2023-08-31
|
[
[
"Mittal",
"Prateek",
""
],
[
"Goyal",
"Puneet",
""
],
[
"Chauhan",
"Joohi",
""
]
] |
Food classification is an important task in health care. In this work, we propose a multimodal classification framework that uses the modified version of EfficientNet with the Mish activation function for image classification, and the traditional BERT transformer-based network is used for text classification. The proposed network and the other state-of-the-art methods are evaluated on a large open-source dataset, UPMC Food-101. The experimental results show that the proposed network outperforms the other methods, a significant difference of 11.57% and 6.34% in accuracy is observed for image and text classification, respectively, when compared with the second-best performing method. We also compared the performance in terms of accuracy, precision, and recall for text classification using both machine learning and deep learning-based models. The comparative analysis from the prediction results of both images and text demonstrated the efficiency and robustness of the proposed approach.
|
2302.01094
|
Deng Weijian
|
Weijian Deng, Yumin Suh, Stephen Gould, Liang Zheng
|
Confidence and Dispersity Speak: Characterising Prediction Matrix for
Unsupervised Accuracy Estimation
|
This version is not fully edited and will be updated soon
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work aims to assess how well a model performs under distribution shifts
without using labels. While recent methods study prediction confidence, this
work reports prediction dispersity is another informative cue. Confidence
reflects whether the individual prediction is certain; dispersity indicates how
the overall predictions are distributed across all categories. Our key insight
is that a well-performing model should give predictions with high confidence
and high dispersity. That is, we need to consider both properties so as to make
more accurate estimates. To this end, we use the nuclear norm that has been
shown to be effective in characterizing both properties. Extensive experiments
validate the effectiveness of nuclear norm for various models (e.g., ViT and
ConvNeXt), different datasets (e.g., ImageNet and CUB-200), and diverse types
of distribution shifts (e.g., style shift and reproduction shift). We show that
the nuclear norm is more accurate and robust in accuracy estimation than
existing methods. Furthermore, we validate the feasibility of other
measurements (e.g., mutual information maximization) for characterizing
dispersity and confidence. Lastly, we investigate the limitation of the nuclear
norm, study its improved variant under severe class imbalance, and discuss
potential directions.
|
[
{
"created": "Thu, 2 Feb 2023 13:30:48 GMT",
"version": "v1"
}
] |
2023-02-03
|
[
[
"Deng",
"Weijian",
""
],
[
"Suh",
"Yumin",
""
],
[
"Gould",
"Stephen",
""
],
[
"Zheng",
"Liang",
""
]
] |
This work aims to assess how well a model performs under distribution shifts without using labels. While recent methods study prediction confidence, this work reports prediction dispersity is another informative cue. Confidence reflects whether the individual prediction is certain; dispersity indicates how the overall predictions are distributed across all categories. Our key insight is that a well-performing model should give predictions with high confidence and high dispersity. That is, we need to consider both properties so as to make more accurate estimates. To this end, we use the nuclear norm that has been shown to be effective in characterizing both properties. Extensive experiments validate the effectiveness of nuclear norm for various models (e.g., ViT and ConvNeXt), different datasets (e.g., ImageNet and CUB-200), and diverse types of distribution shifts (e.g., style shift and reproduction shift). We show that the nuclear norm is more accurate and robust in accuracy estimation than existing methods. Furthermore, we validate the feasibility of other measurements (e.g., mutual information maximization) for characterizing dispersity and confidence. Lastly, we investigate the limitation of the nuclear norm, study its improved variant under severe class imbalance, and discuss potential directions.
|
2210.03372
|
Yuanhao Ban
|
Yuanhao Ban, Yinpeng Dong
|
Pre-trained Adversarial Perturbations
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Self-supervised pre-training has drawn increasing attention in recent years
due to its superior performance on numerous downstream tasks after fine-tuning.
However, it is well-known that deep learning models lack the robustness to
adversarial examples, which can also invoke security issues to pre-trained
models, despite being less explored. In this paper, we delve into the
robustness of pre-trained models by introducing Pre-trained Adversarial
Perturbations (PAPs), which are universal perturbations crafted for the
pre-trained models to maintain the effectiveness when attacking fine-tuned ones
without any knowledge of the downstream tasks. To this end, we propose a
Low-Level Layer Lifting Attack (L4A) method to generate effective PAPs by
lifting the neuron activations of low-level layers of the pre-trained models.
Equipped with an enhanced noise augmentation strategy, L4A is effective at
generating more transferable PAPs against fine-tuned models. Extensive
experiments on typical pre-trained vision models and ten downstream tasks
demonstrate that our method improves the attack success rate by a large margin
compared with state-of-the-art methods.
|
[
{
"created": "Fri, 7 Oct 2022 07:28:03 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Oct 2022 12:37:24 GMT",
"version": "v2"
}
] |
2022-10-17
|
[
[
"Ban",
"Yuanhao",
""
],
[
"Dong",
"Yinpeng",
""
]
] |
Self-supervised pre-training has drawn increasing attention in recent years due to its superior performance on numerous downstream tasks after fine-tuning. However, it is well-known that deep learning models lack the robustness to adversarial examples, which can also invoke security issues to pre-trained models, despite being less explored. In this paper, we delve into the robustness of pre-trained models by introducing Pre-trained Adversarial Perturbations (PAPs), which are universal perturbations crafted for the pre-trained models to maintain the effectiveness when attacking fine-tuned ones without any knowledge of the downstream tasks. To this end, we propose a Low-Level Layer Lifting Attack (L4A) method to generate effective PAPs by lifting the neuron activations of low-level layers of the pre-trained models. Equipped with an enhanced noise augmentation strategy, L4A is effective at generating more transferable PAPs against fine-tuned models. Extensive experiments on typical pre-trained vision models and ten downstream tasks demonstrate that our method improves the attack success rate by a large margin compared with state-of-the-art methods.
|
2105.04593
|
Lening Li
|
Lening Li and Jie Fu
|
Policy Synthesis for Metric Interval Temporal Logic with Probabilistic
Distributions
|
7 pages, 2 figures, submitted to The 60th IEEE conference on Decision
and Control
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Metric Temporal Logic can express temporally evolving properties with
time-critical constraints or time-triggered constraints for real-time systems.
This paper extends the Metric Interval Temporal Logic with a distribution
eventuality operator to express time-sensitive missions for a system
interacting with a dynamic, probabilistic environment. This formalism enables
us to describe the probabilistic occurrences of random external events as part
of the task specification and event-triggered temporal constraints for the
intended system's behavior. The main contributions of this paper are two folds:
First, we propose a procedure to translate a specification into a stochastic
timed automaton. Second, we develop an approximate-optimal probabilistic
planning problem for synthesizing the control policy that maximizes the
probability for the planning agent to achieve the task, provided that the
external events satisfy the specification. The planning algorithm employs a
truncation in the clocks for the timed automaton to reduce the planning in a
countably infinite state space to a finite state space with a bounded error
guarantee. We illustrate the method with a robot motion planning example.
|
[
{
"created": "Mon, 10 May 2021 18:17:12 GMT",
"version": "v1"
}
] |
2021-05-12
|
[
[
"Li",
"Lening",
""
],
[
"Fu",
"Jie",
""
]
] |
Metric Temporal Logic can express temporally evolving properties with time-critical constraints or time-triggered constraints for real-time systems. This paper extends the Metric Interval Temporal Logic with a distribution eventuality operator to express time-sensitive missions for a system interacting with a dynamic, probabilistic environment. This formalism enables us to describe the probabilistic occurrences of random external events as part of the task specification and event-triggered temporal constraints for the intended system's behavior. The main contributions of this paper are two folds: First, we propose a procedure to translate a specification into a stochastic timed automaton. Second, we develop an approximate-optimal probabilistic planning problem for synthesizing the control policy that maximizes the probability for the planning agent to achieve the task, provided that the external events satisfy the specification. The planning algorithm employs a truncation in the clocks for the timed automaton to reduce the planning in a countably infinite state space to a finite state space with a bounded error guarantee. We illustrate the method with a robot motion planning example.
|
1207.1701
|
Sugata Sanyal
|
Preetida Vinayakray-Jani, Sugata Sanyal
|
Security Architecture for Cluster based Ad Hoc Networks
|
5 pages, 6 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile Ad hoc Networks (MANETs) are subject to various kinds of attacks.
Deploying security mechanisms is difficult due to inherent properties of ad hoc
networks, such as the high dynamics of their topology, restricted bandwidth,
and limited resources in end device. With such dynamicity in connectivity and
limited resources it is not possible to deploy centralized security solution
but distribution solution. The paper proposes architectural security concept in
distributed manner where network is divided into clusters with one cluster head
node each. This cluster head node also act as a router providing proactive
hidden routing by using Steganographic methods for inter-cluster security.
Besides cipher method is used to provide intra-cluster security. The proposed
secure architecture specifies operational view of cluster head as a router that
provides trust, anonymity and confidentiality through Steganography and
Cryptography respectively.
|
[
{
"created": "Fri, 6 Jul 2012 18:25:32 GMT",
"version": "v1"
}
] |
2012-07-09
|
[
[
"Vinayakray-Jani",
"Preetida",
""
],
[
"Sanyal",
"Sugata",
""
]
] |
Mobile Ad hoc Networks (MANETs) are subject to various kinds of attacks. Deploying security mechanisms is difficult due to inherent properties of ad hoc networks, such as the high dynamics of their topology, restricted bandwidth, and limited resources in end device. With such dynamicity in connectivity and limited resources it is not possible to deploy centralized security solution but distribution solution. The paper proposes architectural security concept in distributed manner where network is divided into clusters with one cluster head node each. This cluster head node also act as a router providing proactive hidden routing by using Steganographic methods for inter-cluster security. Besides cipher method is used to provide intra-cluster security. The proposed secure architecture specifies operational view of cluster head as a router that provides trust, anonymity and confidentiality through Steganography and Cryptography respectively.
|
1405.7601
|
Nicola Cufaro Petroni
|
Nicola Cufaro Petroni
|
Entropy and its discontents: A note on definitions
|
18 pages, 7 figures; minor modifications required by referees; 1
reference and Acknowledgements added
|
Entropy (2014), 16, 4044-4059
|
10.3390/e16074044
| null |
cs.IT math.IT physics.comp-ph physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The routine definitions of both entropy, and differential entropy show
inconsistencies that make them not reciprocally coherent. We propose a few
possible modifications of these quantities so that 1) they no longer show
incongruities, 2) they go one into the other in a suitable limit as the result
of a renormalization. The properties of the new quantities would slightly
differ from that of the usual entropies in a few other respects
|
[
{
"created": "Thu, 29 May 2014 16:14:26 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Aug 2014 10:13:27 GMT",
"version": "v2"
}
] |
2014-08-08
|
[
[
"Petroni",
"Nicola Cufaro",
""
]
] |
The routine definitions of both entropy, and differential entropy show inconsistencies that make them not reciprocally coherent. We propose a few possible modifications of these quantities so that 1) they no longer show incongruities, 2) they go one into the other in a suitable limit as the result of a renormalization. The properties of the new quantities would slightly differ from that of the usual entropies in a few other respects
|
2209.01232
|
Wenya Wang
|
Wenya Wang, Vivek Srikumar, Hanna Hajishirzi, Noah A. Smith
|
Elaboration-Generating Commonsense Question Answering at Scale
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In question answering requiring common sense, language models (e.g., GPT-3)
have been used to generate text expressing background knowledge that helps
improve performance. Yet the cost of working with such models is very high; in
this work, we finetune smaller language models to generate useful intermediate
context, referred to here as elaborations. Our framework alternates between
updating two language models -- an elaboration generator and an answer
predictor -- allowing each to influence the other. Using less than 0.5% of the
parameters of GPT-3, our model outperforms alternatives with similar sizes and
closes the gap on GPT-3 on four commonsense question answering benchmarks.
Human evaluations show that the quality of the generated elaborations is high.
|
[
{
"created": "Fri, 2 Sep 2022 18:32:09 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jul 2023 21:43:36 GMT",
"version": "v2"
}
] |
2023-07-18
|
[
[
"Wang",
"Wenya",
""
],
[
"Srikumar",
"Vivek",
""
],
[
"Hajishirzi",
"Hanna",
""
],
[
"Smith",
"Noah A.",
""
]
] |
In question answering requiring common sense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge that helps improve performance. Yet the cost of working with such models is very high; in this work, we finetune smaller language models to generate useful intermediate context, referred to here as elaborations. Our framework alternates between updating two language models -- an elaboration generator and an answer predictor -- allowing each to influence the other. Using less than 0.5% of the parameters of GPT-3, our model outperforms alternatives with similar sizes and closes the gap on GPT-3 on four commonsense question answering benchmarks. Human evaluations show that the quality of the generated elaborations is high.
|
2305.14489
|
Nghia T. Le
|
Nghia T. Le, Alan Ritter
|
Are Large Language Models Robust Coreference Resolvers?
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent work on extending coreference resolution across domains and languages
relies on annotated data in both the target domain and language. At the same
time, pre-trained large language models (LMs) have been reported to exhibit
strong zero- and few-shot learning abilities across a wide range of NLP tasks.
However, prior work mostly studied this ability using artificial sentence-level
datasets such as the Winograd Schema Challenge. In this paper, we assess the
feasibility of prompt-based coreference resolution by evaluating
instruction-tuned language models on difficult, linguistically-complex
coreference benchmarks (e.g., CoNLL-2012). We show that prompting for
coreference can outperform current unsupervised coreference systems, although
this approach appears to be reliant on high-quality mention detectors. Further
investigations reveal that instruction-tuned LMs generalize surprisingly well
across domains, languages, and time periods; yet continued fine-tuning of
neural models should still be preferred if small amounts of annotated examples
are available.
|
[
{
"created": "Tue, 23 May 2023 19:38:28 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Nov 2023 04:51:27 GMT",
"version": "v2"
}
] |
2023-11-16
|
[
[
"Le",
"Nghia T.",
""
],
[
"Ritter",
"Alan",
""
]
] |
Recent work on extending coreference resolution across domains and languages relies on annotated data in both the target domain and language. At the same time, pre-trained large language models (LMs) have been reported to exhibit strong zero- and few-shot learning abilities across a wide range of NLP tasks. However, prior work mostly studied this ability using artificial sentence-level datasets such as the Winograd Schema Challenge. In this paper, we assess the feasibility of prompt-based coreference resolution by evaluating instruction-tuned language models on difficult, linguistically-complex coreference benchmarks (e.g., CoNLL-2012). We show that prompting for coreference can outperform current unsupervised coreference systems, although this approach appears to be reliant on high-quality mention detectors. Further investigations reveal that instruction-tuned LMs generalize surprisingly well across domains, languages, and time periods; yet continued fine-tuning of neural models should still be preferred if small amounts of annotated examples are available.
|
2211.01768
|
L Siddharth Mr
|
Guangtong Li, L Siddharth, Jianxi Luo
|
Embedding Knowledge Graph of Patent Metadata to Measure Knowledge
Proximity
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge proximity refers to the strength of association between any two
entities in a structural form that embodies certain aspects of a knowledge
base. In this work, we operationalize knowledge proximity within the context of
the US Patent Database (knowledge base) using a knowledge graph (structural
form) named PatNet built using patent metadata, including citations, inventors,
assignees, and domain classifications. We train various graph embedding models
using PatNet to obtain the embeddings of entities and relations. The cosine
similarity between the corresponding (or transformed) embeddings of entities
denotes the knowledge proximity between these. We compare the embedding models
in terms of their performances in predicting target entities and explaining
domain expansion profiles of inventors and assignees. We then apply the
embeddings of the best-preferred model to associate homogeneous (e.g.,
patent-patent) and heterogeneous (e.g., inventor-assignee) pairs of entities.
|
[
{
"created": "Thu, 3 Nov 2022 12:48:25 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Dec 2022 06:47:50 GMT",
"version": "v2"
}
] |
2022-12-13
|
[
[
"Li",
"Guangtong",
""
],
[
"Siddharth",
"L",
""
],
[
"Luo",
"Jianxi",
""
]
] |
Knowledge proximity refers to the strength of association between any two entities in a structural form that embodies certain aspects of a knowledge base. In this work, we operationalize knowledge proximity within the context of the US Patent Database (knowledge base) using a knowledge graph (structural form) named PatNet built using patent metadata, including citations, inventors, assignees, and domain classifications. We train various graph embedding models using PatNet to obtain the embeddings of entities and relations. The cosine similarity between the corresponding (or transformed) embeddings of entities denotes the knowledge proximity between these. We compare the embedding models in terms of their performances in predicting target entities and explaining domain expansion profiles of inventors and assignees. We then apply the embeddings of the best-preferred model to associate homogeneous (e.g., patent-patent) and heterogeneous (e.g., inventor-assignee) pairs of entities.
|
1104.3810
|
Juha K\"arkk\"ainen
|
Juha K\"arkk\"ainen and Simon J. Puglisi
|
Fixed Block Compression Boosting in FM-Indexes
| null | null | null | null |
cs.DS cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A compressed full-text self-index occupies space close to that of the
compressed text and simultaneously allows fast pattern matching and random
access to the underlying text. Among the best compressed self-indexes, in
theory and in practice, are several members of the FM-index family. In this
paper, we describe new FM-index variants that combine nice theoretical
properties, simple implementation and improved practical performance. Our main
result is a new technique called fixed block compression boosting, which is a
simpler and faster alternative to optimal compression boosting and implicit
compression boosting used in previous FM-indexes.
|
[
{
"created": "Tue, 19 Apr 2011 17:26:46 GMT",
"version": "v1"
}
] |
2011-04-20
|
[
[
"Kärkkäinen",
"Juha",
""
],
[
"Puglisi",
"Simon J.",
""
]
] |
A compressed full-text self-index occupies space close to that of the compressed text and simultaneously allows fast pattern matching and random access to the underlying text. Among the best compressed self-indexes, in theory and in practice, are several members of the FM-index family. In this paper, we describe new FM-index variants that combine nice theoretical properties, simple implementation and improved practical performance. Our main result is a new technique called fixed block compression boosting, which is a simpler and faster alternative to optimal compression boosting and implicit compression boosting used in previous FM-indexes.
|
2405.04441
|
Paola Soto
|
Paola Soto, Miguel Camelo, Danny De Vleeschauwer, Yorick De Bock, Nina
Slamnik-Krije\v{s}torac, Chia-Yu Chang, Natalia Gaviria, Erik Mannens, Juan
F. Botero, Steven Latr\'e
|
Designing, Developing, and Validating Network Intelligence for Scaling
in Service-Based Architectures based on Deep Reinforcement Learning
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Automating network processes without human intervention is crucial for the
complex 6G environment. This requires zero-touch management and orchestration,
the integration of Network Intelligence (NI) into the network architecture, and
the efficient lifecycle management of intelligent functions. Reinforcement
Learning (RL) plays a key role in this context, offering intelligent
decision-making capabilities suited to networks' dynamic nature. Despite its
potential, integrating RL poses challenges in model development and
application. To tackle those issues, we delve into designing, developing, and
validating RL algorithms for scaling network functions in service-based network
architectures such as Open Radio Access Network (O-RAN). It builds upon and
expands previous research on RL lifecycle management by proposing several RL
algorithms and Reward Functions (RFns). Our proposed methodology is anchored on
a dual approach: firstly, it evaluates the training performance of these
algorithms under varying RFns, and secondly, it validates their performance
after being trained to discern the practical applicability in real-world
settings. We show that, despite significant progress, the development stage of
RL techniques for networking applications, particularly in scaling scenarios,
still leaves room for significant improvements. This study underscores the
importance of ongoing research and development to enhance the practicality and
resilience of RL techniques in real-world networking environments.
|
[
{
"created": "Tue, 7 May 2024 16:05:06 GMT",
"version": "v1"
}
] |
2024-05-08
|
[
[
"Soto",
"Paola",
""
],
[
"Camelo",
"Miguel",
""
],
[
"De Vleeschauwer",
"Danny",
""
],
[
"De Bock",
"Yorick",
""
],
[
"Slamnik-Kriještorac",
"Nina",
""
],
[
"Chang",
"Chia-Yu",
""
],
[
"Gaviria",
"Natalia",
""
],
[
"Mannens",
"Erik",
""
],
[
"Botero",
"Juan F.",
""
],
[
"Latré",
"Steven",
""
]
] |
Automating network processes without human intervention is crucial for the complex 6G environment. This requires zero-touch management and orchestration, the integration of Network Intelligence (NI) into the network architecture, and the efficient lifecycle management of intelligent functions. Reinforcement Learning (RL) plays a key role in this context, offering intelligent decision-making capabilities suited to networks' dynamic nature. Despite its potential, integrating RL poses challenges in model development and application. To tackle those issues, we delve into designing, developing, and validating RL algorithms for scaling network functions in service-based network architectures such as Open Radio Access Network (O-RAN). It builds upon and expands previous research on RL lifecycle management by proposing several RL algorithms and Reward Functions (RFns). Our proposed methodology is anchored on a dual approach: firstly, it evaluates the training performance of these algorithms under varying RFns, and secondly, it validates their performance after being trained to discern the practical applicability in real-world settings. We show that, despite significant progress, the development stage of RL techniques for networking applications, particularly in scaling scenarios, still leaves room for significant improvements. This study underscores the importance of ongoing research and development to enhance the practicality and resilience of RL techniques in real-world networking environments.
|
2402.01667
|
Rolin Gabriel RASOANAIVO
|
R\^olin Gabriel Rasoanaivo (IRIT, UT Capitole), Pascale Zarat\'e
(IRIT, UT Capitole, IRIT-ADRIA)
|
Students' accommodation allocation: A Multicriteria Decision Support
System
| null |
SADIO electronic journal of informatics and operations research,
2023, 22 (3)
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The social life of students at university has an impact on their educational
success. The allocation of accommodation is part of this aspect. This article
presents our proposal to improve students' allocation accommodation. We aim to
support university administrative departments for the selection of students for
housing. Therefore, we propose a decision support system based on
multi-criteria decision support methods. To calculate the weights of the
criteria, we use the AHP method. Then, to rank the students, AHP, Weighted Sum
Method and PROMETHEE methods are used. The aim is to find the most adequate
method to rank the students. The result is achieved because the AHP is able to
calculate the weight of criteria and the AHP, SWM and PROMETHEE are able to
rank the students.
|
[
{
"created": "Mon, 15 Jan 2024 14:52:17 GMT",
"version": "v1"
}
] |
2024-02-06
|
[
[
"Rasoanaivo",
"Rôlin Gabriel",
"",
"IRIT, UT Capitole"
],
[
"Zaraté",
"Pascale",
"",
"IRIT, UT Capitole, IRIT-ADRIA"
]
] |
The social life of students at university has an impact on their educational success. The allocation of accommodation is part of this aspect. This article presents our proposal to improve students' allocation accommodation. We aim to support university administrative departments for the selection of students for housing. Therefore, we propose a decision support system based on multi-criteria decision support methods. To calculate the weights of the criteria, we use the AHP method. Then, to rank the students, AHP, Weighted Sum Method and PROMETHEE methods are used. The aim is to find the most adequate method to rank the students. The result is achieved because the AHP is able to calculate the weight of criteria and the AHP, SWM and PROMETHEE are able to rank the students.
|
1809.07256
|
Romain Hennequin
|
Romain Hennequin and Jimena Royo-Letelier and Manuel Moussallam
|
Audio Based Disambiguation Of Music Genre Tags
|
published in ISMIR 2018
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose to infer music genre embeddings from audio datasets
carrying semantic information about genres. We show that such embeddings can be
used for disambiguating genre tags (identification of different labels for the
same genre, tag translation from a tag system to another, inference of
hierarchical taxonomies on these genre tags). These embeddings are built by
training a deep convolutional neural network genre classifier with large audio
datasets annotated with a flat tag system. We show empirically that they makes
it possible to retrieve the original taxonomy of a tag system, spot duplicates
tags and translate tags from a tag system to another.
|
[
{
"created": "Wed, 19 Sep 2018 15:49:12 GMT",
"version": "v1"
}
] |
2018-09-20
|
[
[
"Hennequin",
"Romain",
""
],
[
"Royo-Letelier",
"Jimena",
""
],
[
"Moussallam",
"Manuel",
""
]
] |
In this paper, we propose to infer music genre embeddings from audio datasets carrying semantic information about genres. We show that such embeddings can be used for disambiguating genre tags (identification of different labels for the same genre, tag translation from a tag system to another, inference of hierarchical taxonomies on these genre tags). These embeddings are built by training a deep convolutional neural network genre classifier with large audio datasets annotated with a flat tag system. We show empirically that they makes it possible to retrieve the original taxonomy of a tag system, spot duplicates tags and translate tags from a tag system to another.
|
1912.13139
|
Yuwen Huang
|
Yuwen Huang, Yuan Liu and Fangjiong Chen
|
NOMA-Aided Mobile Edge Computing via User Cooperation
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exploiting the idle computation resources of mobile devices in mobile edge
computing (MEC) system can achieve both channel diversity and computing
diversity as mobile devices can offload their computation tasks to nearby
mobile devices in addition to MEC server embedded access point (AP). In this
paper, we propose a non-orthogonal multiple-access (NOMA)-aided cooperative
computing scheme in a basic three-node MEC system consisting of a user, a
helper, and an AP. In particular, we assume that the user can simultaneously
offload data to the helper and the AP using NOMA, while the helper can locally
compute data and offload data to the AP at the same time. We study two
optimization problems, energy consumption minimization and offloading data
maximization, by joint communication and computation resource allocation of the
user and helper. We find the optimal solutions for the two non-convex problems
by some proper mathematical methods. Simulation results are presented to
demonstrate the effectiveness of the proposed schemes. Some useful insights are
provided for practical designs.
|
[
{
"created": "Tue, 31 Dec 2019 01:51:32 GMT",
"version": "v1"
}
] |
2020-01-01
|
[
[
"Huang",
"Yuwen",
""
],
[
"Liu",
"Yuan",
""
],
[
"Chen",
"Fangjiong",
""
]
] |
Exploiting the idle computation resources of mobile devices in mobile edge computing (MEC) system can achieve both channel diversity and computing diversity as mobile devices can offload their computation tasks to nearby mobile devices in addition to MEC server embedded access point (AP). In this paper, we propose a non-orthogonal multiple-access (NOMA)-aided cooperative computing scheme in a basic three-node MEC system consisting of a user, a helper, and an AP. In particular, we assume that the user can simultaneously offload data to the helper and the AP using NOMA, while the helper can locally compute data and offload data to the AP at the same time. We study two optimization problems, energy consumption minimization and offloading data maximization, by joint communication and computation resource allocation of the user and helper. We find the optimal solutions for the two non-convex problems by some proper mathematical methods. Simulation results are presented to demonstrate the effectiveness of the proposed schemes. Some useful insights are provided for practical designs.
|
1910.12336
|
Patrick Schwab
|
Patrick Schwab, Walter Karlen
|
CXPlain: Causal Explanations for Model Interpretation under Uncertainty
|
To appear in Advances in Neural Information Processing Systems 2019
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature importance estimates that inform users about the degree to which
given inputs influence the output of a predictive model are crucial for
understanding, validating, and interpreting machine-learning models. However,
providing fast and accurate estimates of feature importance for
high-dimensional data, and quantifying the uncertainty of such estimates remain
open challenges. Here, we frame the task of providing explanations for the
decisions of machine-learning models as a causal learning task, and train
causal explanation (CXPlain) models that learn to estimate to what degree
certain inputs cause outputs in another machine-learning model. CXPlain can,
once trained, be used to explain the target model in little time, and enables
the quantification of the uncertainty associated with its feature importance
estimates via bootstrap ensembling. We present experiments that demonstrate
that CXPlain is significantly more accurate and faster than existing
model-agnostic methods for estimating feature importance. In addition, we
confirm that the uncertainty estimates provided by CXPlain ensembles are
strongly correlated with their ability to accurately estimate feature
importance on held-out data.
|
[
{
"created": "Sun, 27 Oct 2019 19:59:18 GMT",
"version": "v1"
}
] |
2019-10-29
|
[
[
"Schwab",
"Patrick",
""
],
[
"Karlen",
"Walter",
""
]
] |
Feature importance estimates that inform users about the degree to which given inputs influence the output of a predictive model are crucial for understanding, validating, and interpreting machine-learning models. However, providing fast and accurate estimates of feature importance for high-dimensional data, and quantifying the uncertainty of such estimates remain open challenges. Here, we frame the task of providing explanations for the decisions of machine-learning models as a causal learning task, and train causal explanation (CXPlain) models that learn to estimate to what degree certain inputs cause outputs in another machine-learning model. CXPlain can, once trained, be used to explain the target model in little time, and enables the quantification of the uncertainty associated with its feature importance estimates via bootstrap ensembling. We present experiments that demonstrate that CXPlain is significantly more accurate and faster than existing model-agnostic methods for estimating feature importance. In addition, we confirm that the uncertainty estimates provided by CXPlain ensembles are strongly correlated with their ability to accurately estimate feature importance on held-out data.
|
1212.3162
|
Aaron Gerow
|
Aaron Gerow and Khurshid Ahmad
|
Diachronic Variation in Grammatical Relations
| null |
Proceedings of the 24th International Conference on Computational
Linguistics (COLING 2012), Mumbai, India
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method of finding and analyzing shifts in grammatical relations
found in diachronic corpora. Inspired by the econometric technique of measuring
return and volatility instead of relative frequencies, we propose them as a way
to better characterize changes in grammatical patterns like nominalization,
modification and comparison. To exemplify the use of these techniques, we
examine a corpus of NIPS papers and report trends which manifest at the token,
part-of-speech and grammatical levels. Building up from frequency observations
to a second-order analysis, we show that shifts in frequencies overlook deeper
trends in language, even when part-of-speech information is included. Examining
token, POS and grammatical levels of variation enables a summary view of
diachronic text as a whole. We conclude with a discussion about how these
methods can inform intuitions about specialist domains as well as changes in
language use as a whole.
|
[
{
"created": "Thu, 13 Dec 2012 13:00:55 GMT",
"version": "v1"
}
] |
2012-12-14
|
[
[
"Gerow",
"Aaron",
""
],
[
"Ahmad",
"Khurshid",
""
]
] |
We present a method of finding and analyzing shifts in grammatical relations found in diachronic corpora. Inspired by the econometric technique of measuring return and volatility instead of relative frequencies, we propose them as a way to better characterize changes in grammatical patterns like nominalization, modification and comparison. To exemplify the use of these techniques, we examine a corpus of NIPS papers and report trends which manifest at the token, part-of-speech and grammatical levels. Building up from frequency observations to a second-order analysis, we show that shifts in frequencies overlook deeper trends in language, even when part-of-speech information is included. Examining token, POS and grammatical levels of variation enables a summary view of diachronic text as a whole. We conclude with a discussion about how these methods can inform intuitions about specialist domains as well as changes in language use as a whole.
|
1409.5166
|
Hu Qin
|
Hu Qin, Zizhen Zhang, Yubin Xie, Andrew Lim
|
A Tabu Search Algorithm for the Multi-period Inspector Scheduling
Problem
| null | null | null | null |
cs.AI cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a multi-period inspector scheduling problem (MPISP),
which is a new variant of the multi-trip vehicle routing problem with time
windows (VRPTW). In the MPISP, each inspector is scheduled to perform a route
in a given multi-period planning horizon. At the end of each period, each
inspector is not required to return to the depot but has to stay at one of the
vertices for recuperation. If the remaining time of the current period is
insufficient for an inspector to travel from his/her current vertex $A$ to a
certain vertex B, he/she can choose either waiting at vertex A until the start
of the next period or traveling to a vertex C that is closer to vertex B.
Therefore, the shortest transit time between any vertex pair is affected by the
length of the period and the departure time. We first describe an approach of
computing the shortest transit time between any pair of vertices with an
arbitrary departure time. To solve the MPISP, we then propose several local
search operators adapted from classical operators for the VRPTW and integrate
them into a tabu search framework. In addition, we present a constrained
knapsack model that is able to produce an upper bound for the problem. Finally,
we evaluate the effectiveness of our algorithm with extensive experiments based
on a set of test instances. Our computational results indicate that our
approach generates high-quality solutions.
|
[
{
"created": "Wed, 17 Sep 2014 23:29:46 GMT",
"version": "v1"
}
] |
2014-09-19
|
[
[
"Qin",
"Hu",
""
],
[
"Zhang",
"Zizhen",
""
],
[
"Xie",
"Yubin",
""
],
[
"Lim",
"Andrew",
""
]
] |
This paper introduces a multi-period inspector scheduling problem (MPISP), which is a new variant of the multi-trip vehicle routing problem with time windows (VRPTW). In the MPISP, each inspector is scheduled to perform a route in a given multi-period planning horizon. At the end of each period, each inspector is not required to return to the depot but has to stay at one of the vertices for recuperation. If the remaining time of the current period is insufficient for an inspector to travel from his/her current vertex $A$ to a certain vertex B, he/she can choose either waiting at vertex A until the start of the next period or traveling to a vertex C that is closer to vertex B. Therefore, the shortest transit time between any vertex pair is affected by the length of the period and the departure time. We first describe an approach of computing the shortest transit time between any pair of vertices with an arbitrary departure time. To solve the MPISP, we then propose several local search operators adapted from classical operators for the VRPTW and integrate them into a tabu search framework. In addition, we present a constrained knapsack model that is able to produce an upper bound for the problem. Finally, we evaluate the effectiveness of our algorithm with extensive experiments based on a set of test instances. Our computational results indicate that our approach generates high-quality solutions.
|
1512.05256
|
Naveen Sivadasan Dr
|
Kanigalpula Samanvi and Naveen Sivadasan
|
Subgraph Similarity Search in Large Graphs
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the major challenges in applications related to social networks,
computational biology, collaboration networks etc., is to efficiently search
for similar patterns in their underlying graphs. These graphs are typically
noisy and contain thousands of vertices and millions of edges. In many cases,
the graphs are unlabeled and the notion of similarity is also not well defined.
We study the problem of searching an induced subgraph in a large target graph
that is most similar to the given query graph. We assume that the query graph
and target graph are undirected and unlabeled. We use graphlet kernels
\cite{shervashidze2009efficient} to define graph similarity. Graphlet kernels
are known to perform better than other kernels in different applications.
Our algorithm maps topological neighborhood information of vertices in the
query and target graphs to vectors. These local topological informations are
then combined to find a target subgraph having highly similar global topology
with the given query graph. We tested our algorithm on several real world
networks such as facebook network, google plus network, youtube network, amazon
network etc. Most of them contain thousands of vertices and million edges. Our
algorithm is able to detect highly similar matches when queried in these
networks. Our multi-threaded implementation takes about one second to find the
match on a 32 core machine, excluding the time for one time preprocessing.
Computationally expensive parts of our algorithm can be further scaled to
standard parallel and distributed frameworks like map-reduce.
|
[
{
"created": "Wed, 16 Dec 2015 17:22:40 GMT",
"version": "v1"
}
] |
2015-12-17
|
[
[
"Samanvi",
"Kanigalpula",
""
],
[
"Sivadasan",
"Naveen",
""
]
] |
One of the major challenges in applications related to social networks, computational biology, collaboration networks etc., is to efficiently search for similar patterns in their underlying graphs. These graphs are typically noisy and contain thousands of vertices and millions of edges. In many cases, the graphs are unlabeled and the notion of similarity is also not well defined. We study the problem of searching an induced subgraph in a large target graph that is most similar to the given query graph. We assume that the query graph and target graph are undirected and unlabeled. We use graphlet kernels \cite{shervashidze2009efficient} to define graph similarity. Graphlet kernels are known to perform better than other kernels in different applications. Our algorithm maps topological neighborhood information of vertices in the query and target graphs to vectors. These local topological informations are then combined to find a target subgraph having highly similar global topology with the given query graph. We tested our algorithm on several real world networks such as facebook network, google plus network, youtube network, amazon network etc. Most of them contain thousands of vertices and million edges. Our algorithm is able to detect highly similar matches when queried in these networks. Our multi-threaded implementation takes about one second to find the match on a 32 core machine, excluding the time for one time preprocessing. Computationally expensive parts of our algorithm can be further scaled to standard parallel and distributed frameworks like map-reduce.
|
2008.10932
|
Kathrin Hanauer
|
Kathrin Hanauer, Christian Schulz, Jonathan Trummer
|
O'Reach: Even Faster Reachability in Large Graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most fundamental problems in computer science is the reachability
problem: Given a directed graph and two vertices s and t, can s reach t via a
path? We revisit existing techniques and combine them with new approaches to
support a large portion of reachability queries in constant time using a
linear-sized reachability index. Our new algorithm O'Reach can be easily
combined with previously developed solutions for the problem or run standalone.
In a detailed experimental study, we compare a variety of algorithms with
respect to their index-building and query times as well as their memory
footprint on a diverse set of instances. Our experiments indicate that the
query performance often depends strongly not only on the type of graph, but
also on the result, i.e., reachable or unreachable. Furthermore, we show that
previous algorithms are significantly sped up when combined with our new
approach in almost all scenarios. Surprisingly, due to cache effects, a higher
investment in space doesn't necessarily pay off: Reachability queries can often
be answered even faster than single memory accesses in a precomputed full
reachability matrix.
|
[
{
"created": "Tue, 25 Aug 2020 10:34:55 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Feb 2021 17:58:31 GMT",
"version": "v2"
}
] |
2021-02-02
|
[
[
"Hanauer",
"Kathrin",
""
],
[
"Schulz",
"Christian",
""
],
[
"Trummer",
"Jonathan",
""
]
] |
One of the most fundamental problems in computer science is the reachability problem: Given a directed graph and two vertices s and t, can s reach t via a path? We revisit existing techniques and combine them with new approaches to support a large portion of reachability queries in constant time using a linear-sized reachability index. Our new algorithm O'Reach can be easily combined with previously developed solutions for the problem or run standalone. In a detailed experimental study, we compare a variety of algorithms with respect to their index-building and query times as well as their memory footprint on a diverse set of instances. Our experiments indicate that the query performance often depends strongly not only on the type of graph, but also on the result, i.e., reachable or unreachable. Furthermore, we show that previous algorithms are significantly sped up when combined with our new approach in almost all scenarios. Surprisingly, due to cache effects, a higher investment in space doesn't necessarily pay off: Reachability queries can often be answered even faster than single memory accesses in a precomputed full reachability matrix.
|
1509.01706
|
Jing Zhang
|
Jing Zhang and Ioannis Ch. Paschalidis
|
An Improved Composite Hypothesis Test for Markov Models with
Applications in Network Anomaly Detection
|
6 pages, 6 figures; final version for CDC 2015
| null | null | null |
cs.IT cs.SY math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work has proposed the use of a composite hypothesis Hoeffding test for
statistical anomaly detection. Setting an appropriate threshold for the test
given a desired false alarm probability involves approximating the false alarm
probability. To that end, a large deviations asymptotic is typically used
which, however, often results in an inaccurate setting of the threshold,
especially for relatively small sample sizes. This, in turn, results in an
anomaly detection test that does not control well for false alarms. In this
paper, we develop a tighter approximation using the Central Limit Theorem (CLT)
under Markovian assumptions. We apply our result to a network anomaly detection
application and demonstrate its advantages over earlier work.
|
[
{
"created": "Sat, 5 Sep 2015 15:03:12 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Mar 2016 18:22:29 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Sep 2016 18:14:09 GMT",
"version": "v3"
}
] |
2016-09-19
|
[
[
"Zhang",
"Jing",
""
],
[
"Paschalidis",
"Ioannis Ch.",
""
]
] |
Recent work has proposed the use of a composite hypothesis Hoeffding test for statistical anomaly detection. Setting an appropriate threshold for the test given a desired false alarm probability involves approximating the false alarm probability. To that end, a large deviations asymptotic is typically used which, however, often results in an inaccurate setting of the threshold, especially for relatively small sample sizes. This, in turn, results in an anomaly detection test that does not control well for false alarms. In this paper, we develop a tighter approximation using the Central Limit Theorem (CLT) under Markovian assumptions. We apply our result to a network anomaly detection application and demonstrate its advantages over earlier work.
|
2405.06201
|
Hao Lu
|
Jiyao Wang, Hao Lu, Ange Wang, Xiao Yang, Yingcong Chen, Dengbo He,
Kaishun Wu
|
PhysMLE: Generalizable and Priors-Inclusive Multi-task Remote
Physiological Measurement
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Remote photoplethysmography (rPPG) has been widely applied to measure heart
rate from face videos. To increase the generalizability of the algorithms,
domain generalization (DG) attracted increasing attention in rPPG. However,
when rPPG is extended to simultaneously measure more vital signs (e.g.,
respiration and blood oxygen saturation), achieving generalizability brings new
challenges. Although partial features shared among different physiological
signals can benefit multi-task learning, the sparse and imbalanced target label
space brings the seesaw effect over task-specific feature learning. To resolve
this problem, we designed an end-to-end Mixture of Low-rank Experts for
multi-task remote Physiological measurement (PhysMLE), which is based on
multiple low-rank experts with a novel router mechanism, thereby enabling the
model to adeptly handle both specifications and correlations within tasks.
Additionally, we introduced prior knowledge from physiology among tasks to
overcome the imbalance of label space under real-world multi-task physiological
measurement. For fair and comprehensive evaluations, this paper proposed a
large-scale multi-task generalization benchmark, named Multi-Source Synsemantic
Domain Generalization (MSSDG) protocol. Extensive experiments with MSSDG and
intra-dataset have shown the effectiveness and efficiency of PhysMLE. In
addition, a new dataset was collected and made publicly available to meet the
needs of the MSSDG.
|
[
{
"created": "Fri, 10 May 2024 02:36:54 GMT",
"version": "v1"
}
] |
2024-05-13
|
[
[
"Wang",
"Jiyao",
""
],
[
"Lu",
"Hao",
""
],
[
"Wang",
"Ange",
""
],
[
"Yang",
"Xiao",
""
],
[
"Chen",
"Yingcong",
""
],
[
"He",
"Dengbo",
""
],
[
"Wu",
"Kaishun",
""
]
] |
Remote photoplethysmography (rPPG) has been widely applied to measure heart rate from face videos. To increase the generalizability of the algorithms, domain generalization (DG) attracted increasing attention in rPPG. However, when rPPG is extended to simultaneously measure more vital signs (e.g., respiration and blood oxygen saturation), achieving generalizability brings new challenges. Although partial features shared among different physiological signals can benefit multi-task learning, the sparse and imbalanced target label space brings the seesaw effect over task-specific feature learning. To resolve this problem, we designed an end-to-end Mixture of Low-rank Experts for multi-task remote Physiological measurement (PhysMLE), which is based on multiple low-rank experts with a novel router mechanism, thereby enabling the model to adeptly handle both specifications and correlations within tasks. Additionally, we introduced prior knowledge from physiology among tasks to overcome the imbalance of label space under real-world multi-task physiological measurement. For fair and comprehensive evaluations, this paper proposed a large-scale multi-task generalization benchmark, named Multi-Source Synsemantic Domain Generalization (MSSDG) protocol. Extensive experiments with MSSDG and intra-dataset have shown the effectiveness and efficiency of PhysMLE. In addition, a new dataset was collected and made publicly available to meet the needs of the MSSDG.
|
1910.09589
|
Vassilis N. Ioannidis
|
Vassilis N. Ioannidis, Dimitris Berberidis, Georgios B. Giannakis
|
GraphSAC: Detecting anomalies in large-scale graphs
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph-based sampling and consensus (GraphSAC) approach is introduced to
effectively detect anomalous nodes in large-scale graphs. Existing approaches
rely on connectivity and attributes of all nodes to assign an anomaly score per
node. However, nodal attributes and network links might be compromised by
adversaries, rendering these holistic approaches vulnerable. Alleviating this
limitation, GraphSAC randomly draws subsets of nodes, and relies on graph-aware
criteria to judiciously filter out sets contaminated by anomalous nodes, before
employing a semi-supervised learning (SSL) module to estimate nominal label
distributions per node. These learned nominal distributions are minimally
affected by the anomalous nodes, and hence can be directly adopted for anomaly
detection. Rigorous analysis provides performance guarantees for GraphSAC, by
bounding the required number of draws. The per-draw complexity grows linearly
with the number of edges, which implies efficient SSL, while draws can be run
in parallel, thereby ensuring scalability to large graphs. GraphSAC is tested
under different anomaly generation models based on random walks, clustered
anomalies, as well as contemporary adversarial attacks for graph data.
Experiments with real-world graphs showcase the advantage of GraphSAC relative
to state-of-the-art alternatives.
|
[
{
"created": "Mon, 21 Oct 2019 18:30:03 GMT",
"version": "v1"
}
] |
2019-10-23
|
[
[
"Ioannidis",
"Vassilis N.",
""
],
[
"Berberidis",
"Dimitris",
""
],
[
"Giannakis",
"Georgios B.",
""
]
] |
A graph-based sampling and consensus (GraphSAC) approach is introduced to effectively detect anomalous nodes in large-scale graphs. Existing approaches rely on connectivity and attributes of all nodes to assign an anomaly score per node. However, nodal attributes and network links might be compromised by adversaries, rendering these holistic approaches vulnerable. Alleviating this limitation, GraphSAC randomly draws subsets of nodes, and relies on graph-aware criteria to judiciously filter out sets contaminated by anomalous nodes, before employing a semi-supervised learning (SSL) module to estimate nominal label distributions per node. These learned nominal distributions are minimally affected by the anomalous nodes, and hence can be directly adopted for anomaly detection. Rigorous analysis provides performance guarantees for GraphSAC, by bounding the required number of draws. The per-draw complexity grows linearly with the number of edges, which implies efficient SSL, while draws can be run in parallel, thereby ensuring scalability to large graphs. GraphSAC is tested under different anomaly generation models based on random walks, clustered anomalies, as well as contemporary adversarial attacks for graph data. Experiments with real-world graphs showcase the advantage of GraphSAC relative to state-of-the-art alternatives.
|
1909.05163
|
Ziqi Wang
|
Ziqi Wang, Jiahui Li, Seyran Khademi, Jan van Gemert
|
Attention-Aware Age-Agnostic Visual Place Recognition
|
Presented at ICCV WORKSHOP ON E-HERITAGE 2019, Seoul, South Korea
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A cross-domain visual place recognition (VPR) task is proposed in this work,
i.e., matching images of the same architectures depicted in different domains.
VPR is commonly treated as an image retrieval task, where a query image from an
unknown location is matched with relevant instances from geo-tagged gallery
database. Different from conventional VPR settings where the query images and
gallery images come from the same domain, we propose a more common but
challenging setup where the query images are collected under a new unseen
condition. The two domains involved in this work are contemporary street view
images of Amsterdam from the Mapillary dataset (source domain) and historical
images of the same city from Beeldbank dataset (target domain). We tailored an
age-invariant feature learning CNN that can focus on domain invariant objects
and learn to match images based on a weakly supervised ranking loss. We propose
an attention aggregation module that is robust to domain discrepancy between
the train and the test data. Further, a multi-kernel maximum mean discrepancy
(MK-MMD) domain adaptation loss is adopted to improve the cross-domain ranking
performance. Both attention and adaptation modules are unsupervised while the
ranking loss uses weak supervision. Visual inspection shows that the attention
module focuses on built forms while the dramatically changing environment are
less weighed. Our proposed CNN achieves state of the art results (99% accuracy)
on the single-domain VPR task and 20% accuracy at its best on the cross-domain
VPR task, revealing the difficulty of age-invariant VPR.
|
[
{
"created": "Wed, 11 Sep 2019 16:04:42 GMT",
"version": "v1"
}
] |
2019-09-12
|
[
[
"Wang",
"Ziqi",
""
],
[
"Li",
"Jiahui",
""
],
[
"Khademi",
"Seyran",
""
],
[
"van Gemert",
"Jan",
""
]
] |
A cross-domain visual place recognition (VPR) task is proposed in this work, i.e., matching images of the same architectures depicted in different domains. VPR is commonly treated as an image retrieval task, where a query image from an unknown location is matched with relevant instances from geo-tagged gallery database. Different from conventional VPR settings where the query images and gallery images come from the same domain, we propose a more common but challenging setup where the query images are collected under a new unseen condition. The two domains involved in this work are contemporary street view images of Amsterdam from the Mapillary dataset (source domain) and historical images of the same city from Beeldbank dataset (target domain). We tailored an age-invariant feature learning CNN that can focus on domain invariant objects and learn to match images based on a weakly supervised ranking loss. We propose an attention aggregation module that is robust to domain discrepancy between the train and the test data. Further, a multi-kernel maximum mean discrepancy (MK-MMD) domain adaptation loss is adopted to improve the cross-domain ranking performance. Both attention and adaptation modules are unsupervised while the ranking loss uses weak supervision. Visual inspection shows that the attention module focuses on built forms while the dramatically changing environment are less weighed. Our proposed CNN achieves state of the art results (99% accuracy) on the single-domain VPR task and 20% accuracy at its best on the cross-domain VPR task, revealing the difficulty of age-invariant VPR.
|
1201.3307
|
Erwan Le Martelot
|
Erwan Le Martelot and Chris Hankin
|
Multi-scale Community Detection using Stability Optimisation within
Greedy Algorithms
|
This paper is an extension of the paper named "Multi-scale Community
Detection using Stability as Optimisation Criterion in a Greedy Algorithm" by
the same authors published in Proc. of the 2011 Int. Conf. on Knowledge
Discovery and Information Retrieval (KDIR 2011), SciTePress, 2011, 216-225
| null | null | null |
cs.DS cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many real systems can be represented as networks whose analysis can be very
informative regarding the original system's organisation. In the past decade
community detection received a lot of attention and is now an active field of
research. Recently stability was introduced as a new measure for partition
quality. This work investigates stability as an optimisation criterion that
exploits a Markov process view of networks to enable multi-scale community
detection. Several heuristics and variations of an algorithm optimising
stability are presented as well as an application to overlapping communities.
Experiments show that the method enables accurate multi-scale network analysis.
|
[
{
"created": "Mon, 16 Jan 2012 16:25:09 GMT",
"version": "v1"
}
] |
2015-03-20
|
[
[
"Martelot",
"Erwan Le",
""
],
[
"Hankin",
"Chris",
""
]
] |
Many real systems can be represented as networks whose analysis can be very informative regarding the original system's organisation. In the past decade community detection received a lot of attention and is now an active field of research. Recently stability was introduced as a new measure for partition quality. This work investigates stability as an optimisation criterion that exploits a Markov process view of networks to enable multi-scale community detection. Several heuristics and variations of an algorithm optimising stability are presented as well as an application to overlapping communities. Experiments show that the method enables accurate multi-scale network analysis.
|
2308.10548
|
Qiqi Gu
|
Qiqi Gu and Wei Ke
|
Typing Composable Coroutines
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Coroutine, as a powerful programming construct, is widely used in
asynchronous applications to replace thread-based programming or the callback
hell. Using coroutines makes code more readable and maintainable, for its
ability to transfer control while keeping the literal scope. However, reasoning
about coroutine behavior can be challenging without proper typing. We propose a
type notation and calculus for composing asymmetric, first-class, stackless
coroutines. Given the types of a list of coroutines, we can compute a composed
type matching the collective behavior of the coroutines, so that the input and
output can be type-checked by a type system. Our coroutine types can model the
data received by or yielded from a coroutine, which be of coroutine types as
well. On top of our type calculus, we discuss its soundness and evaluation
issues, then provide four application scenarios of our coroutine types. Not
only can our types be used in modern programming languages, such as Python, but
also model program behaviors in OCaml and even Prolog.
|
[
{
"created": "Mon, 21 Aug 2023 08:04:07 GMT",
"version": "v1"
}
] |
2024-05-17
|
[
[
"Gu",
"Qiqi",
""
],
[
"Ke",
"Wei",
""
]
] |
Coroutine, as a powerful programming construct, is widely used in asynchronous applications to replace thread-based programming or the callback hell. Using coroutines makes code more readable and maintainable, for its ability to transfer control while keeping the literal scope. However, reasoning about coroutine behavior can be challenging without proper typing. We propose a type notation and calculus for composing asymmetric, first-class, stackless coroutines. Given the types of a list of coroutines, we can compute a composed type matching the collective behavior of the coroutines, so that the input and output can be type-checked by a type system. Our coroutine types can model the data received by or yielded from a coroutine, which be of coroutine types as well. On top of our type calculus, we discuss its soundness and evaluation issues, then provide four application scenarios of our coroutine types. Not only can our types be used in modern programming languages, such as Python, but also model program behaviors in OCaml and even Prolog.
|
2212.01215
|
Xu Chen
|
Qingze Fang and Zhiwei Zhai and Shuai Yu and Qiong Wu and Xiaowen Gong
and Xu Chen
|
Olive Branch Learning: A Topology-Aware Federated Learning Framework for
Space-Air-Ground Integrated Network
|
accepted by IEEE Transactions on Wireless Communications, Dec. 2022
| null | null | null |
cs.NI cs.AI cs.DC cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The space-air-ground integrated network (SAGIN), one of the key technologies
for next-generation mobile communication systems, can facilitate data
transmission for users all over the world, especially in some remote areas
where vast amounts of informative data are collected by Internet of remote
things (IoRT) devices to support various data-driven artificial intelligence
(AI) services. However, training AI models centrally with the assistance of
SAGIN faces the challenges of highly constrained network topology, inefficient
data transmission, and privacy issues. To tackle these challenges, we first
propose a novel topology-aware federated learning framework for the SAGIN,
namely Olive Branch Learning (OBL). Specifically, the IoRT devices in the
ground layer leverage their private data to perform model training locally,
while the air nodes in the air layer and the ring-structured low earth orbit
(LEO) satellite constellation in the space layer are in charge of model
aggregation (synchronization) at different scales.To further enhance
communication efficiency and inference performance of OBL, an efficient
Communication and Non-IID-aware Air node-Satellite Assignment (CNASA) algorithm
is designed by taking the data class distribution of the air nodes as well as
their geographic locations into account. Furthermore, we extend our OBL
framework and CNASA algorithm to adapt to more complex multi-orbit satellite
networks. We analyze the convergence of our OBL framework and conclude that the
CNASA algorithm contributes to the fast convergence of the global model.
Extensive experiments based on realistic datasets corroborate the superior
performance of our algorithm over the benchmark policies.
|
[
{
"created": "Fri, 2 Dec 2022 14:51:42 GMT",
"version": "v1"
}
] |
2022-12-05
|
[
[
"Fang",
"Qingze",
""
],
[
"Zhai",
"Zhiwei",
""
],
[
"Yu",
"Shuai",
""
],
[
"Wu",
"Qiong",
""
],
[
"Gong",
"Xiaowen",
""
],
[
"Chen",
"Xu",
""
]
] |
The space-air-ground integrated network (SAGIN), one of the key technologies for next-generation mobile communication systems, can facilitate data transmission for users all over the world, especially in some remote areas where vast amounts of informative data are collected by Internet of remote things (IoRT) devices to support various data-driven artificial intelligence (AI) services. However, training AI models centrally with the assistance of SAGIN faces the challenges of highly constrained network topology, inefficient data transmission, and privacy issues. To tackle these challenges, we first propose a novel topology-aware federated learning framework for the SAGIN, namely Olive Branch Learning (OBL). Specifically, the IoRT devices in the ground layer leverage their private data to perform model training locally, while the air nodes in the air layer and the ring-structured low earth orbit (LEO) satellite constellation in the space layer are in charge of model aggregation (synchronization) at different scales.To further enhance communication efficiency and inference performance of OBL, an efficient Communication and Non-IID-aware Air node-Satellite Assignment (CNASA) algorithm is designed by taking the data class distribution of the air nodes as well as their geographic locations into account. Furthermore, we extend our OBL framework and CNASA algorithm to adapt to more complex multi-orbit satellite networks. We analyze the convergence of our OBL framework and conclude that the CNASA algorithm contributes to the fast convergence of the global model. Extensive experiments based on realistic datasets corroborate the superior performance of our algorithm over the benchmark policies.
|
2405.09273
|
Jo\~ao Vitor Pamplona
|
Jan Pablo Burgard and Jo\~ao Vitor Pamplona
|
Fair Generalized Linear Mixed Models
|
25 pages, 12 figures. arXiv admin note: text overlap with
arXiv:2405.06433
| null | null | null |
cs.LG math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
When using machine learning for automated prediction, it is important to
account for fairness in the prediction. Fairness in machine learning aims to
ensure that biases in the data and model inaccuracies do not lead to
discriminatory decisions. E.g., predictions from fair machine learning models
should not discriminate against sensitive variables such as sexual orientation
and ethnicity. The training data often in obtained from social surveys. In
social surveys, oftentimes the data collection process is a strata sampling,
e.g. due to cost restrictions. In strata samples, the assumption of
independence between the observation is not fulfilled. Hence, if the machine
learning models do not account for the strata correlations, the results may be
biased. Especially high is the bias in cases where the strata assignment is
correlated to the variable of interest. We present in this paper an algorithm
that can handle both problems simultaneously, and we demonstrate the impact of
stratified sampling on the quality of fair machine learning predictions in a
reproducible simulation study.
|
[
{
"created": "Wed, 15 May 2024 11:42:41 GMT",
"version": "v1"
},
{
"created": "Wed, 22 May 2024 06:08:03 GMT",
"version": "v2"
}
] |
2024-05-24
|
[
[
"Burgard",
"Jan Pablo",
""
],
[
"Pamplona",
"João Vitor",
""
]
] |
When using machine learning for automated prediction, it is important to account for fairness in the prediction. Fairness in machine learning aims to ensure that biases in the data and model inaccuracies do not lead to discriminatory decisions. E.g., predictions from fair machine learning models should not discriminate against sensitive variables such as sexual orientation and ethnicity. The training data often in obtained from social surveys. In social surveys, oftentimes the data collection process is a strata sampling, e.g. due to cost restrictions. In strata samples, the assumption of independence between the observation is not fulfilled. Hence, if the machine learning models do not account for the strata correlations, the results may be biased. Especially high is the bias in cases where the strata assignment is correlated to the variable of interest. We present in this paper an algorithm that can handle both problems simultaneously, and we demonstrate the impact of stratified sampling on the quality of fair machine learning predictions in a reproducible simulation study.
|
2008.11695
|
Vignesh Prasad
|
Vignesh Prasad, Ruth Stock-Homburg, Jan Peters
|
Advances in Human-Robot Handshaking
|
Accepted at The 12th International Conference on Social Robotics
(ICSR 2020) 12 Pages, 1 Figure
| null |
10.1007/978-3-030-62056-1_40
| null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The use of social, anthropomorphic robots to support humans in various
industries has been on the rise. During Human-Robot Interaction (HRI),
physically interactive non-verbal behaviour is key for more natural
interactions. Handshaking is one such natural interaction used commonly in many
social contexts. It is one of the first non-verbal interactions which takes
place and should, therefore, be part of the repertoire of a social robot. In
this paper, we explore the existing state of Human-Robot Handshaking and
discuss possible ways forward for such physically interactive behaviours.
|
[
{
"created": "Wed, 26 Aug 2020 17:35:06 GMT",
"version": "v1"
}
] |
2020-11-20
|
[
[
"Prasad",
"Vignesh",
""
],
[
"Stock-Homburg",
"Ruth",
""
],
[
"Peters",
"Jan",
""
]
] |
The use of social, anthropomorphic robots to support humans in various industries has been on the rise. During Human-Robot Interaction (HRI), physically interactive non-verbal behaviour is key for more natural interactions. Handshaking is one such natural interaction used commonly in many social contexts. It is one of the first non-verbal interactions which takes place and should, therefore, be part of the repertoire of a social robot. In this paper, we explore the existing state of Human-Robot Handshaking and discuss possible ways forward for such physically interactive behaviours.
|
2406.13201
|
Yicong Li
|
Yicong Li, Yu Yang, Jiannong Cao, Shuaiqi Liu, Haoran Tang, Guandong
Xu
|
Toward Structure Fairness in Dynamic Graph Embedding: A Trend-aware Dual
Debiasing Approach
| null | null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies successfully learned static graph embeddings that are
structurally fair by preventing the effectiveness disparity of high- and
low-degree vertex groups in downstream graph mining tasks. However, achieving
structure fairness in dynamic graph embedding remains an open problem.
Neglecting degree changes in dynamic graphs will significantly impair embedding
effectiveness without notably improving structure fairness. This is because the
embedding performance of high-degree and low-to-high-degree vertices will
significantly drop close to the generally poorer embedding performance of most
slightly changed vertices in the long-tail part of the power-law distribution.
We first identify biased structural evolutions in a dynamic graph based on the
evolving trend of vertex degree and then propose FairDGE, the first
structurally Fair Dynamic Graph Embedding algorithm. FairDGE learns biased
structural evolutions by jointly embedding the connection changes among
vertices and the long-short-term evolutionary trend of vertex degrees.
Furthermore, a novel dual debiasing approach is devised to encode fair
embeddings contrastively, customizing debiasing strategies for different biased
structural evolutions. This innovative debiasing strategy breaks the
effectiveness bottleneck of embeddings without notable fairness loss. Extensive
experiments demonstrate that FairDGE achieves simultaneous improvement in the
effectiveness and fairness of embeddings.
|
[
{
"created": "Wed, 19 Jun 2024 04:20:12 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Li",
"Yicong",
""
],
[
"Yang",
"Yu",
""
],
[
"Cao",
"Jiannong",
""
],
[
"Liu",
"Shuaiqi",
""
],
[
"Tang",
"Haoran",
""
],
[
"Xu",
"Guandong",
""
]
] |
Recent studies successfully learned static graph embeddings that are structurally fair by preventing the effectiveness disparity of high- and low-degree vertex groups in downstream graph mining tasks. However, achieving structure fairness in dynamic graph embedding remains an open problem. Neglecting degree changes in dynamic graphs will significantly impair embedding effectiveness without notably improving structure fairness. This is because the embedding performance of high-degree and low-to-high-degree vertices will significantly drop close to the generally poorer embedding performance of most slightly changed vertices in the long-tail part of the power-law distribution. We first identify biased structural evolutions in a dynamic graph based on the evolving trend of vertex degree and then propose FairDGE, the first structurally Fair Dynamic Graph Embedding algorithm. FairDGE learns biased structural evolutions by jointly embedding the connection changes among vertices and the long-short-term evolutionary trend of vertex degrees. Furthermore, a novel dual debiasing approach is devised to encode fair embeddings contrastively, customizing debiasing strategies for different biased structural evolutions. This innovative debiasing strategy breaks the effectiveness bottleneck of embeddings without notable fairness loss. Extensive experiments demonstrate that FairDGE achieves simultaneous improvement in the effectiveness and fairness of embeddings.
|
2008.07669
|
Albert Gu
|
Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, Christopher Re
|
HiPPO: Recurrent Memory with Optimal Polynomial Projections
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A central problem in learning from sequential data is representing cumulative
history in an incremental fashion as more data is processed. We introduce a
general framework (HiPPO) for the online compression of continuous signals and
discrete time series by projection onto polynomial bases. Given a measure that
specifies the importance of each time step in the past, HiPPO produces an
optimal solution to a natural online function approximation problem. As special
cases, our framework yields a short derivation of the recent Legendre Memory
Unit (LMU) from first principles, and generalizes the ubiquitous gating
mechanism of recurrent neural networks such as GRUs. This formal framework
yields a new memory update mechanism (HiPPO-LegS) that scales through time to
remember all history, avoiding priors on the timescale. HiPPO-LegS enjoys the
theoretical benefits of timescale robustness, fast updates, and bounded
gradients. By incorporating the memory dynamics into recurrent neural networks,
HiPPO RNNs can empirically capture complex temporal dependencies. On the
benchmark permuted MNIST dataset, HiPPO-LegS sets a new state-of-the-art
accuracy of 98.3%. Finally, on a novel trajectory classification task testing
robustness to out-of-distribution timescales and missing data, HiPPO-LegS
outperforms RNN and neural ODE baselines by 25-40% accuracy.
|
[
{
"created": "Mon, 17 Aug 2020 23:39:33 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Oct 2020 02:48:03 GMT",
"version": "v2"
}
] |
2020-10-26
|
[
[
"Gu",
"Albert",
""
],
[
"Dao",
"Tri",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Rudra",
"Atri",
""
],
[
"Re",
"Christopher",
""
]
] |
A central problem in learning from sequential data is representing cumulative history in an incremental fashion as more data is processed. We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases. Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem. As special cases, our framework yields a short derivation of the recent Legendre Memory Unit (LMU) from first principles, and generalizes the ubiquitous gating mechanism of recurrent neural networks such as GRUs. This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale. HiPPO-LegS enjoys the theoretical benefits of timescale robustness, fast updates, and bounded gradients. By incorporating the memory dynamics into recurrent neural networks, HiPPO RNNs can empirically capture complex temporal dependencies. On the benchmark permuted MNIST dataset, HiPPO-LegS sets a new state-of-the-art accuracy of 98.3%. Finally, on a novel trajectory classification task testing robustness to out-of-distribution timescales and missing data, HiPPO-LegS outperforms RNN and neural ODE baselines by 25-40% accuracy.
|
2206.06232
|
Maksym Andriushchenko
|
Maksym Andriushchenko, Nicolas Flammarion
|
Towards Understanding Sharpness-Aware Minimization
|
The camera-ready version (accepted at ICML 2022)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sharpness-Aware Minimization (SAM) is a recent training method that relies on
worst-case weight perturbations which significantly improves generalization in
various settings. We argue that the existing justifications for the success of
SAM which are based on a PAC-Bayes generalization bound and the idea of
convergence to flat minima are incomplete. Moreover, there are no explanations
for the success of using $m$-sharpness in SAM which has been shown as essential
for generalization. To better understand this aspect of SAM, we theoretically
analyze its implicit bias for diagonal linear networks. We prove that SAM
always chooses a solution that enjoys better generalization properties than
standard gradient descent for a certain class of problems, and this effect is
amplified by using $m$-sharpness. We further study the properties of the
implicit bias on non-linear networks empirically, where we show that
fine-tuning a standard model with SAM can lead to significant generalization
improvements. Finally, we provide convergence results of SAM for non-convex
objectives when used with stochastic gradients. We illustrate these results
empirically for deep networks and discuss their relation to the generalization
behavior of SAM. The code of our experiments is available at
https://github.com/tml-epfl/understanding-sam.
|
[
{
"created": "Mon, 13 Jun 2022 15:07:32 GMT",
"version": "v1"
}
] |
2022-06-14
|
[
[
"Andriushchenko",
"Maksym",
""
],
[
"Flammarion",
"Nicolas",
""
]
] |
Sharpness-Aware Minimization (SAM) is a recent training method that relies on worst-case weight perturbations which significantly improves generalization in various settings. We argue that the existing justifications for the success of SAM which are based on a PAC-Bayes generalization bound and the idea of convergence to flat minima are incomplete. Moreover, there are no explanations for the success of using $m$-sharpness in SAM which has been shown as essential for generalization. To better understand this aspect of SAM, we theoretically analyze its implicit bias for diagonal linear networks. We prove that SAM always chooses a solution that enjoys better generalization properties than standard gradient descent for a certain class of problems, and this effect is amplified by using $m$-sharpness. We further study the properties of the implicit bias on non-linear networks empirically, where we show that fine-tuning a standard model with SAM can lead to significant generalization improvements. Finally, we provide convergence results of SAM for non-convex objectives when used with stochastic gradients. We illustrate these results empirically for deep networks and discuss their relation to the generalization behavior of SAM. The code of our experiments is available at https://github.com/tml-epfl/understanding-sam.
|
2107.07703
|
Otmar Ertl
|
Otmar Ertl
|
Estimation from Partially Sampled Distributed Traces
| null | null | null | null |
cs.DS cs.DC stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sampling is often a necessary evil to reduce the processing and storage costs
of distributed tracing. In this work, we describe a scalable and adaptive
sampling approach that can preserve events of interest better than the widely
used head-based sampling approach. Sampling rates can be chosen individually
and independently for every span, allowing to take span attributes and local
resource constraints into account. The resulting traces are often only
partially and not completely sampled which complicates statistical analysis. To
exploit the given information, an unbiased estimation algorithm is presented.
Even though it does not need to know whether the traces are complete, it
reduces the estimation error in many cases compared to considering only
complete traces.
|
[
{
"created": "Fri, 16 Jul 2021 04:41:24 GMT",
"version": "v1"
}
] |
2021-07-19
|
[
[
"Ertl",
"Otmar",
""
]
] |
Sampling is often a necessary evil to reduce the processing and storage costs of distributed tracing. In this work, we describe a scalable and adaptive sampling approach that can preserve events of interest better than the widely used head-based sampling approach. Sampling rates can be chosen individually and independently for every span, allowing to take span attributes and local resource constraints into account. The resulting traces are often only partially and not completely sampled which complicates statistical analysis. To exploit the given information, an unbiased estimation algorithm is presented. Even though it does not need to know whether the traces are complete, it reduces the estimation error in many cases compared to considering only complete traces.
|
2306.14850
|
Till Fluschnik
|
Eva Michelle Deltl, Till Fluschnik, Robert Bredereck
|
Algorithmics of Egalitarian versus Equitable Sequences of Committees
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the election of sequences of committees, where in each of $\tau$
levels (e.g. modeling points in time) a committee consisting of $k$ candidates
from a common set of $m$ candidates is selected. For each level, each of $n$
agents (voters) may nominate one candidate whose selection would satisfy her.
We are interested in committees which are good with respect to the satisfaction
per day and per agent. More precisely, we look for egalitarian or equitable
committee sequences. While both guarantee that at least $x$ agents per day are
satisfied, egalitarian committee sequences ensure that each agent is satisfied
in at least $y$ levels while equitable committee sequences ensure that each
agent is satisfied in exactly $y$ levels. We analyze the parameterized
complexity of finding such committees for the parameters $n,m,k,\tau,x$, and
$y$, as well as combinations thereof.
|
[
{
"created": "Mon, 26 Jun 2023 17:02:18 GMT",
"version": "v1"
}
] |
2023-06-27
|
[
[
"Deltl",
"Eva Michelle",
""
],
[
"Fluschnik",
"Till",
""
],
[
"Bredereck",
"Robert",
""
]
] |
We study the election of sequences of committees, where in each of $\tau$ levels (e.g. modeling points in time) a committee consisting of $k$ candidates from a common set of $m$ candidates is selected. For each level, each of $n$ agents (voters) may nominate one candidate whose selection would satisfy her. We are interested in committees which are good with respect to the satisfaction per day and per agent. More precisely, we look for egalitarian or equitable committee sequences. While both guarantee that at least $x$ agents per day are satisfied, egalitarian committee sequences ensure that each agent is satisfied in at least $y$ levels while equitable committee sequences ensure that each agent is satisfied in exactly $y$ levels. We analyze the parameterized complexity of finding such committees for the parameters $n,m,k,\tau,x$, and $y$, as well as combinations thereof.
|
cs/0408037
|
Joergen Villadsen
|
J{\o}rgen Villadsen
|
Multi-dimensional Type Theory: Rules, Categories, and Combinators for
Syntax and Semantics
|
20 pages
| null | null | null |
cs.CL cs.AI cs.LO
| null |
We investigate the possibility of modelling the syntax and semantics of
natural language by constraints, or rules, imposed by the multi-dimensional
type theory Nabla. The only multiplicity we explicitly consider is two, namely
one dimension for the syntax and one dimension for the semantics, but the
general perspective is important. For example, issues of pragmatics could be
handled as additional dimensions.
One of the main problems addressed is the rather complicated repertoire of
operations that exists besides the notion of categories in traditional Montague
grammar. For the syntax we use a categorial grammar along the lines of Lambek.
For the semantics we use so-called lexical and logical combinators inspired by
work in natural logic. Nabla provides a concise interpretation and a sequent
calculus as the basis for implementations.
|
[
{
"created": "Sun, 15 Aug 2004 08:51:19 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Villadsen",
"Jørgen",
""
]
] |
We investigate the possibility of modelling the syntax and semantics of natural language by constraints, or rules, imposed by the multi-dimensional type theory Nabla. The only multiplicity we explicitly consider is two, namely one dimension for the syntax and one dimension for the semantics, but the general perspective is important. For example, issues of pragmatics could be handled as additional dimensions. One of the main problems addressed is the rather complicated repertoire of operations that exists besides the notion of categories in traditional Montague grammar. For the syntax we use a categorial grammar along the lines of Lambek. For the semantics we use so-called lexical and logical combinators inspired by work in natural logic. Nabla provides a concise interpretation and a sequent calculus as the basis for implementations.
|
1801.02745
|
Jo\~ao Ribeiro
|
Mahdi Cheraghchi and Jo\~ao Ribeiro
|
Structural Results and Improved Upper Bounds on the Capacity of the
Discrete-Time Poisson Channel
|
28 pages, 3 figures. Added an appendix and made small edits
throughout the paper. A preliminary version of this paper appears in the
Proceedings of the IEEE International Symposium on Information Theory, 2018
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
New capacity upper bounds are presented for the discrete-time Poisson channel
with no dark current and an average-power constraint. These bounds are a simple
consequence of techniques developed for the seemingly unrelated problem of
upper bounding the capacity of binary deletion and repetition channels.
Previously, the best known capacity upper bound in the regime where the
average-power constraint does not approach zero was due to Martinez (JOSA B,
2007), which is re-derived as a special case of the framework developed in this
paper. Furthermore, this framework is carefully instantiated in order to obtain
a closed-form bound that noticeably improves the result of Martinez everywhere.
Finally, capacity-achieving distributions for the discrete-time Poisson channel
are studied under an average-power constraint and/or a peak-power constraint
and arbitrary dark current. In particular, it is shown that the support of the
capacity-achieving distribution under an average-power constraint only must be
countably infinite. This settles a conjecture of Shamai (IEE Proceedings I,
1990) in the affirmative. Previously, it was only known that the support must
be unbounded.
|
[
{
"created": "Tue, 9 Jan 2018 01:43:19 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jan 2018 02:52:38 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Jan 2018 18:32:03 GMT",
"version": "v3"
},
{
"created": "Thu, 8 Mar 2018 17:23:33 GMT",
"version": "v4"
},
{
"created": "Sat, 7 Jul 2018 15:23:38 GMT",
"version": "v5"
},
{
"created": "Fri, 20 Jul 2018 11:19:56 GMT",
"version": "v6"
}
] |
2018-07-23
|
[
[
"Cheraghchi",
"Mahdi",
""
],
[
"Ribeiro",
"João",
""
]
] |
New capacity upper bounds are presented for the discrete-time Poisson channel with no dark current and an average-power constraint. These bounds are a simple consequence of techniques developed for the seemingly unrelated problem of upper bounding the capacity of binary deletion and repetition channels. Previously, the best known capacity upper bound in the regime where the average-power constraint does not approach zero was due to Martinez (JOSA B, 2007), which is re-derived as a special case of the framework developed in this paper. Furthermore, this framework is carefully instantiated in order to obtain a closed-form bound that noticeably improves the result of Martinez everywhere. Finally, capacity-achieving distributions for the discrete-time Poisson channel are studied under an average-power constraint and/or a peak-power constraint and arbitrary dark current. In particular, it is shown that the support of the capacity-achieving distribution under an average-power constraint only must be countably infinite. This settles a conjecture of Shamai (IEE Proceedings I, 1990) in the affirmative. Previously, it was only known that the support must be unbounded.
|
1604.05492
|
Geoffroy Fouquier
|
Guillaume Pitel, Geoffroy Fouquier, Emmanuel Marchand and Abdul
Mouhamadsultane
|
Count-Min Tree Sketch: Approximate counting for NLP
|
submitted to the second International Symposium on Web Algorithms
(iSwag'2016). arXiv admin note: text overlap with arXiv:1502.04885, In the
proceedings of the Second International Symposium on Web Algorithms (iSWAG
2016), June 9-10, 2016, Deauville, Normandy, France
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Count-Min Sketch is a widely adopted structure for approximate event
counting in large scale processing. In a previous work we improved the original
version of the Count-Min-Sketch (CMS) with conservative update using
approximate counters instead of linear counters. These structures are
computationaly efficient and improve the average relative error (ARE) of a CMS
at constant memory footprint. These improvements are well suited for NLP tasks,
in which one is interested by the low-frequency items. However, if Log counters
allow to improve ARE, they produce a residual error due to the approximation.
In this paper, we propose the Count-Min Tree Sketch (Copyright 2016 eXenSa. All
rights reserved) variant with pyramidal counters, which are focused toward
taking advantage of the Zipfian distribution of text data.
|
[
{
"created": "Tue, 19 Apr 2016 09:51:34 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Apr 2016 09:44:51 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Jun 2016 06:15:34 GMT",
"version": "v3"
}
] |
2016-06-16
|
[
[
"Pitel",
"Guillaume",
""
],
[
"Fouquier",
"Geoffroy",
""
],
[
"Marchand",
"Emmanuel",
""
],
[
"Mouhamadsultane",
"Abdul",
""
]
] |
The Count-Min Sketch is a widely adopted structure for approximate event counting in large scale processing. In a previous work we improved the original version of the Count-Min-Sketch (CMS) with conservative update using approximate counters instead of linear counters. These structures are computationaly efficient and improve the average relative error (ARE) of a CMS at constant memory footprint. These improvements are well suited for NLP tasks, in which one is interested by the low-frequency items. However, if Log counters allow to improve ARE, they produce a residual error due to the approximation. In this paper, we propose the Count-Min Tree Sketch (Copyright 2016 eXenSa. All rights reserved) variant with pyramidal counters, which are focused toward taking advantage of the Zipfian distribution of text data.
|
2211.09571
|
Martin Bullinger
|
Felix Brandt and Martin Bullinger and Ana\"elle Wilczynski
|
Reaching Individually Stable Coalition Structures
|
A preliminary version of this article appeared in the Proceedings of
the 35th AAAI Conference on Artificial Intelligence (2021)
| null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
The formal study of coalition formation in multi-agent systems is typically
realized in the framework of hedonic games, which originate from economic
theory. The main focus of this branch of research has been on the existence and
the computational complexity of deciding the existence of coalition structures
that satisfy various stability criteria. The actual process of forming
coalitions based on individual behavior has received little attention. In this
paper, we study the convergence of simple dynamics leading to stable partitions
in a variety of established classes of hedonic games including anonymous,
dichotomous, fractional, and hedonic diversity games. The dynamics we consider
is based on individual stability: an agent will join another coalition if she
is better off and no member of the welcoming coalition is worse off.
Our results are threefold. First, we identify conditions for the (fast)
convergence of our dynamics. To this end, we develop new techniques based on
the simultaneous usage of multiple intertwined potential functions and
establish a reduction uncovering a close relationship between anonymous hedonic
games and hedonic diversity games. Second, we provide elaborate counterexamples
determining tight boundaries for the existence of individually stable
partitions. Third, we study the computational complexity of problems related to
the coalition formation dynamics. In particular, we settle open problems
suggested by Bogomolnaia and Jackson (2002), Brandl et al. (2005), and Boehmer
and Elkind (2020).
|
[
{
"created": "Thu, 17 Nov 2022 14:55:29 GMT",
"version": "v1"
}
] |
2022-11-18
|
[
[
"Brandt",
"Felix",
""
],
[
"Bullinger",
"Martin",
""
],
[
"Wilczynski",
"Anaëlle",
""
]
] |
The formal study of coalition formation in multi-agent systems is typically realized in the framework of hedonic games, which originate from economic theory. The main focus of this branch of research has been on the existence and the computational complexity of deciding the existence of coalition structures that satisfy various stability criteria. The actual process of forming coalitions based on individual behavior has received little attention. In this paper, we study the convergence of simple dynamics leading to stable partitions in a variety of established classes of hedonic games including anonymous, dichotomous, fractional, and hedonic diversity games. The dynamics we consider is based on individual stability: an agent will join another coalition if she is better off and no member of the welcoming coalition is worse off. Our results are threefold. First, we identify conditions for the (fast) convergence of our dynamics. To this end, we develop new techniques based on the simultaneous usage of multiple intertwined potential functions and establish a reduction uncovering a close relationship between anonymous hedonic games and hedonic diversity games. Second, we provide elaborate counterexamples determining tight boundaries for the existence of individually stable partitions. Third, we study the computational complexity of problems related to the coalition formation dynamics. In particular, we settle open problems suggested by Bogomolnaia and Jackson (2002), Brandl et al. (2005), and Boehmer and Elkind (2020).
|
1107.5397
|
Dr. Md. Headayetullah PhD
|
Md.Headayetullah, G.K. Pradhan, Sanjay Biswas, B. Puthal
|
Proposed Information Sharing Security Approach for Security Personnels,
Vertical Integration, Semantic Interoperability Architecture and Framework
for Digital Government
|
20 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper mainly depicts the conceptual overview of vertical integration,
semantic interoperability architecture such as Educational Sector Architectural
Framework (ESAF) for New Zealand government and different interoperability
framework solution for digital government. In this paper, we try to develop a
secure information sharing approach for digital government to improve home land
security. This approach is a role and cooperation based approach for security
personnel of different government departments. In order to run any successful
digital government of any country in the world, it is necessary to interact
with their citizen and to share secure information via different network among
the citizen or other government. Consequently, in order to smooth the progress
of users to cooperate with and share information without darkness and
flawlessly transversely different networks and databases universally, a safe
and trusted information-sharing environment has been renowned as a very
important requirement and to press forward homeland security endeavor. The key
incentive following this research is to put up a secure and trusted
information-sharing approach for government departments. This paper presents a
proficient function and teamwork based information sharing approach for safe
exchange of hush-hush and privileged information amid security personnels and
government departments inside the national boundaries by means of public key
cryptography. The expanded approach makes use of cryptographic hash function;
public key cryptosystem and a unique and complex mapping function for securely
swapping over secret information. Moreover, the projected approach facilitates
privacy preserving information sharing with probable restrictions based on the
rank of the security personnels.
|
[
{
"created": "Wed, 27 Jul 2011 06:48:48 GMT",
"version": "v1"
}
] |
2015-03-12
|
[
[
"Headayetullah",
"Md.",
""
],
[
"Pradhan",
"G. K.",
""
],
[
"Biswas",
"Sanjay",
""
],
[
"Puthal",
"B.",
""
]
] |
This paper mainly depicts the conceptual overview of vertical integration, semantic interoperability architecture such as Educational Sector Architectural Framework (ESAF) for New Zealand government and different interoperability framework solution for digital government. In this paper, we try to develop a secure information sharing approach for digital government to improve home land security. This approach is a role and cooperation based approach for security personnel of different government departments. In order to run any successful digital government of any country in the world, it is necessary to interact with their citizen and to share secure information via different network among the citizen or other government. Consequently, in order to smooth the progress of users to cooperate with and share information without darkness and flawlessly transversely different networks and databases universally, a safe and trusted information-sharing environment has been renowned as a very important requirement and to press forward homeland security endeavor. The key incentive following this research is to put up a secure and trusted information-sharing approach for government departments. This paper presents a proficient function and teamwork based information sharing approach for safe exchange of hush-hush and privileged information amid security personnels and government departments inside the national boundaries by means of public key cryptography. The expanded approach makes use of cryptographic hash function; public key cryptosystem and a unique and complex mapping function for securely swapping over secret information. Moreover, the projected approach facilitates privacy preserving information sharing with probable restrictions based on the rank of the security personnels.
|
2001.02632
|
Amin Sakzad
|
Amin Sakzad, Ron Steinfeld
|
Comments on "Physical-layer cryptography through massive MIMO"
|
arXiv admin note: substantial text overlap with arXiv:1507.08015
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present two attacks on two different versions of physical layer
cryptography schemes based on massive multiple-input multiple-output (MIMO).
Both cryptosystems employ a singular value decomposition (SVD) precoding
technique. For the first one, we show that the eavesdropper (who knows its own
channel and the channel between legitimate users) can decrypt the information
data under the same condition as the legitimate receiver. We study the
signal-to-noise advantage ratio for decoding by the legitimate user over the
eavesdropper in a more generalized scheme when an arbitrary precoder at the
transmitter is employed. On the negative side, we show that if the eavesdropper
uses a number of receive antennas much larger than the number of legitimate
user antennas, then there is no advantage, independent of the precoding scheme
employed at the transmitter. On the positive side, for the case where the
adversary is limited to have the same number of antennas as legitimate users,
we give an $O\left(n^2\right)$ upper bound on the advantage and show that this
bound can be approached using an inverse precoder. For the second cryptosystem,
we show that the required security conditions prevent the legitimate user from
decoding the plaintext uniquely.
|
[
{
"created": "Tue, 7 Jan 2020 09:55:40 GMT",
"version": "v1"
}
] |
2020-01-09
|
[
[
"Sakzad",
"Amin",
""
],
[
"Steinfeld",
"Ron",
""
]
] |
We present two attacks on two different versions of physical layer cryptography schemes based on massive multiple-input multiple-output (MIMO). Both cryptosystems employ a singular value decomposition (SVD) precoding technique. For the first one, we show that the eavesdropper (who knows its own channel and the channel between legitimate users) can decrypt the information data under the same condition as the legitimate receiver. We study the signal-to-noise advantage ratio for decoding by the legitimate user over the eavesdropper in a more generalized scheme when an arbitrary precoder at the transmitter is employed. On the negative side, we show that if the eavesdropper uses a number of receive antennas much larger than the number of legitimate user antennas, then there is no advantage, independent of the precoding scheme employed at the transmitter. On the positive side, for the case where the adversary is limited to have the same number of antennas as legitimate users, we give an $O\left(n^2\right)$ upper bound on the advantage and show that this bound can be approached using an inverse precoder. For the second cryptosystem, we show that the required security conditions prevent the legitimate user from decoding the plaintext uniquely.
|
1901.10681
|
Marc Ru{\ss}wurm
|
Marc Ru{\ss}wurm, Nicolas Courty, R\'emi Emonet, S\'ebastien
Lef\`evre, Devis Tuia, Romain Tavenard
|
End-to-End Learned Early Classification of Time Series for In-Season
Crop Type Mapping
|
accepted for publication in ISPRS Journal of Photogrammetry and
Remote Sensing
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Remote sensing satellites capture the cyclic dynamics of our Planet in
regular time intervals recorded in satellite time series data. End-to-end
trained deep learning models use this time series data to make predictions at a
large scale, for instance, to produce up-to-date crop cover maps. Most time
series classification approaches focus on the accuracy of predictions. However,
the earliness of the prediction is also of great importance since coming to an
early decision can make a crucial difference in time-sensitive applications. In
this work, we present an End-to-End Learned Early Classification of Time Series
(ELECTS) model that estimates a classification score and a probability of
whether sufficient data has been observed to come to an early and still
accurate decision. ELECTS is modular: any deep time series classification model
can adopt the ELECTS conceptual idea by adding a second prediction head that
outputs a probability of stopping the classification. The ELECTS loss function
then optimizes the overall model on a balanced objective of earliness and
accuracy. Our experiments on four crop classification datasets from Europe and
Africa show that ELECTS allows reaching state-of-the-art accuracy while
reducing the quantity of data massively to be downloaded, stored, and
processed. The source code is available at https://github.com/marccoru/elects.
|
[
{
"created": "Wed, 30 Jan 2019 05:51:41 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 20:52:14 GMT",
"version": "v2"
}
] |
2022-12-23
|
[
[
"Rußwurm",
"Marc",
""
],
[
"Courty",
"Nicolas",
""
],
[
"Emonet",
"Rémi",
""
],
[
"Lefèvre",
"Sébastien",
""
],
[
"Tuia",
"Devis",
""
],
[
"Tavenard",
"Romain",
""
]
] |
Remote sensing satellites capture the cyclic dynamics of our Planet in regular time intervals recorded in satellite time series data. End-to-end trained deep learning models use this time series data to make predictions at a large scale, for instance, to produce up-to-date crop cover maps. Most time series classification approaches focus on the accuracy of predictions. However, the earliness of the prediction is also of great importance since coming to an early decision can make a crucial difference in time-sensitive applications. In this work, we present an End-to-End Learned Early Classification of Time Series (ELECTS) model that estimates a classification score and a probability of whether sufficient data has been observed to come to an early and still accurate decision. ELECTS is modular: any deep time series classification model can adopt the ELECTS conceptual idea by adding a second prediction head that outputs a probability of stopping the classification. The ELECTS loss function then optimizes the overall model on a balanced objective of earliness and accuracy. Our experiments on four crop classification datasets from Europe and Africa show that ELECTS allows reaching state-of-the-art accuracy while reducing the quantity of data massively to be downloaded, stored, and processed. The source code is available at https://github.com/marccoru/elects.
|
2206.13732
|
Chuanfu Shen
|
Chuanfu Shen, Shiqi Yu, Jilong Wang, George Q. Huang and Liang Wang
|
A Comprehensive Survey on Deep Gait Recognition: Algorithms, Datasets
and Challenges
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gait recognition aims to identify a person at a distance, serving as a
promising solution for long-distance and less-cooperation pedestrian
recognition. Recently, significant advancements in gait recognition have
achieved inspiring success in many challenging scenarios by utilizing deep
learning techniques. Against the backdrop that deep gait recognition has
achieved almost perfect performance in laboratory datasets, much recent
research has introduced new challenges for gait recognition, including robust
deep representation modeling, in-the-wild gait recognition, and even
recognition from new visual sensors such as infrared and depth cameras.
Meanwhile, the increasing performance of gait recognition might also reveal
concerns about biometrics security and privacy prevention for society. We
provide a comprehensive survey on recent literature using deep learning and a
discussion on the privacy and security of gait biometrics. This survey reviews
the existing deep gait recognition methods through a novel view based on our
proposed taxonomy. The proposed taxonomy differs from the conventional taxonomy
of categorizing available gait recognition methods into the model- or
appearance-based methods, while our taxonomic hierarchy considers deep gait
recognition from two perspectives: deep representation learning and deep
network architectures, illustrating the current approaches from both micro and
macro levels. We also include up-to-date reviews of datasets and performance
evaluations on diverse scenarios. Finally, we introduce privacy and security
concerns on gait biometrics and discuss outstanding challenges and potential
directions for future research.
|
[
{
"created": "Tue, 28 Jun 2022 03:36:12 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Aug 2023 06:03:55 GMT",
"version": "v2"
}
] |
2023-08-08
|
[
[
"Shen",
"Chuanfu",
""
],
[
"Yu",
"Shiqi",
""
],
[
"Wang",
"Jilong",
""
],
[
"Huang",
"George Q.",
""
],
[
"Wang",
"Liang",
""
]
] |
Gait recognition aims to identify a person at a distance, serving as a promising solution for long-distance and less-cooperation pedestrian recognition. Recently, significant advancements in gait recognition have achieved inspiring success in many challenging scenarios by utilizing deep learning techniques. Against the backdrop that deep gait recognition has achieved almost perfect performance in laboratory datasets, much recent research has introduced new challenges for gait recognition, including robust deep representation modeling, in-the-wild gait recognition, and even recognition from new visual sensors such as infrared and depth cameras. Meanwhile, the increasing performance of gait recognition might also reveal concerns about biometrics security and privacy prevention for society. We provide a comprehensive survey on recent literature using deep learning and a discussion on the privacy and security of gait biometrics. This survey reviews the existing deep gait recognition methods through a novel view based on our proposed taxonomy. The proposed taxonomy differs from the conventional taxonomy of categorizing available gait recognition methods into the model- or appearance-based methods, while our taxonomic hierarchy considers deep gait recognition from two perspectives: deep representation learning and deep network architectures, illustrating the current approaches from both micro and macro levels. We also include up-to-date reviews of datasets and performance evaluations on diverse scenarios. Finally, we introduce privacy and security concerns on gait biometrics and discuss outstanding challenges and potential directions for future research.
|
2108.02941
|
Vukosi Marivate
|
Harm de Wet, Vukosi Marivate
|
Is it Fake? News Disinformation Detection on South African News Websites
|
6 pages, Accepted and to be published in AFRICON 2021
| null | null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Disinformation through fake news is an ongoing problem in our society and has
become easily spread through social media. The most cost and time effective way
to filter these large amounts of data is to use a combination of human and
technical interventions to identify it. From a technical perspective, Natural
Language Processing (NLP) is widely used in detecting fake news. Social media
companies use NLP techniques to identify the fake news and warn their users,
but fake news may still slip through undetected. It is especially a problem in
more localised contexts (outside the United States of America). How do we
adjust fake news detection systems to work better for local contexts such as in
South Africa. In this work we investigate fake news detection on South African
websites. We curate a dataset of South African fake news and then train
detection models. We contrast this with using widely available fake news
datasets (from mostly USA website). We also explore making the datasets more
diverse by combining them and observe the differences in behaviour in writing
between nations' fake news using interpretable machine learning.
|
[
{
"created": "Fri, 6 Aug 2021 04:54:03 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Aug 2021 17:23:05 GMT",
"version": "v2"
}
] |
2021-08-10
|
[
[
"de Wet",
"Harm",
""
],
[
"Marivate",
"Vukosi",
""
]
] |
Disinformation through fake news is an ongoing problem in our society and has become easily spread through social media. The most cost and time effective way to filter these large amounts of data is to use a combination of human and technical interventions to identify it. From a technical perspective, Natural Language Processing (NLP) is widely used in detecting fake news. Social media companies use NLP techniques to identify the fake news and warn their users, but fake news may still slip through undetected. It is especially a problem in more localised contexts (outside the United States of America). How do we adjust fake news detection systems to work better for local contexts such as in South Africa. In this work we investigate fake news detection on South African websites. We curate a dataset of South African fake news and then train detection models. We contrast this with using widely available fake news datasets (from mostly USA website). We also explore making the datasets more diverse by combining them and observe the differences in behaviour in writing between nations' fake news using interpretable machine learning.
|
0906.4012
|
Man-On Pun
|
Man-On Pun, Kyeong Jin Kim, Ronald Iltis and H. Vincent Poor
|
Reduced-Feedback Opportunistic Scheduling and Beamforming with GMD for
MIMO-OFDMA
|
Proc. Asilomar Conference on Signals, Systems, and Computers, Pacific
Grove, CA, Nov. 2008
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Opportunistic scheduling and beamforming schemes have been proposed
previously by the authors for reduced-feedback MIMO-OFDMA downlink systems
where the MIMO channel of each subcarrier is decomposed into layered spatial
subchannels. It has been demonstrated that significant feedback reduction can
be achieved by returning information about only one beamforming matrix (BFM)
for all subcarriers from each MT, compared to one BFM for each subcarrier in
the conventional schemes. However, since the previously proposed channel
decomposition was derived based on singular value decomposition, the resulting
system performance is impaired by the subchannels associated with the smallest
singular values. To circumvent this obstacle, this work proposes improved
opportunistic scheduling and beamforming schemes based on geometric mean
decomposition-based channel decomposition. In addition to the inherent
advantage in reduced feedback, the proposed schemes can achieve improved system
performance by decomposing the MIMO channels into spatial subchannels with more
evenly distributed channel gains. Numerical results confirm the effectiveness
of the proposed opportunistic scheduling and beamforming schemes.
|
[
{
"created": "Mon, 22 Jun 2009 14:28:14 GMT",
"version": "v1"
}
] |
2009-06-23
|
[
[
"Pun",
"Man-On",
""
],
[
"Kim",
"Kyeong Jin",
""
],
[
"Iltis",
"Ronald",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
Opportunistic scheduling and beamforming schemes have been proposed previously by the authors for reduced-feedback MIMO-OFDMA downlink systems where the MIMO channel of each subcarrier is decomposed into layered spatial subchannels. It has been demonstrated that significant feedback reduction can be achieved by returning information about only one beamforming matrix (BFM) for all subcarriers from each MT, compared to one BFM for each subcarrier in the conventional schemes. However, since the previously proposed channel decomposition was derived based on singular value decomposition, the resulting system performance is impaired by the subchannels associated with the smallest singular values. To circumvent this obstacle, this work proposes improved opportunistic scheduling and beamforming schemes based on geometric mean decomposition-based channel decomposition. In addition to the inherent advantage in reduced feedback, the proposed schemes can achieve improved system performance by decomposing the MIMO channels into spatial subchannels with more evenly distributed channel gains. Numerical results confirm the effectiveness of the proposed opportunistic scheduling and beamforming schemes.
|
2104.10814
|
Paulo Rezeck
|
Paulo Rezeck, Renato M. Assuncao and Luiz Chaimowicz
|
Flocking-Segregative Swarming Behaviors using Gibbs Random Fields
|
7 pages, 11 figures, accepted by ICRA 2021
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel approach that allows a swarm of heterogeneous
robots to produce simultaneously segregative and flocking behaviors using only
local sensing. These behaviors have been widely studied in swarm robotics and
their combination allows the execution of several complex tasks, ranging from
surveillance and reconnaissance, to search and rescue, to transport, and to
foraging. Although there are several works in the literature proposing
different strategies to achieve these behaviors, to the best of our knowledge,
this paper is the first to propose an algorithm that emerges simultaneously
behaviors and do not rely on global information or communication. Our approach
consists of modeling the swarm as a Gibbs Random Field (GRF) and using
appropriate potential functions to reach segregation, cohesion and consensus on
the velocity of the swarm. Simulations and proof-of-concept experiments using
real robots are presented to evaluate the performance of our methodology in
comparison to some of the state-of-the-art works that tackle segregative
behaviors.
|
[
{
"created": "Thu, 22 Apr 2021 01:12:10 GMT",
"version": "v1"
}
] |
2021-04-23
|
[
[
"Rezeck",
"Paulo",
""
],
[
"Assuncao",
"Renato M.",
""
],
[
"Chaimowicz",
"Luiz",
""
]
] |
This paper presents a novel approach that allows a swarm of heterogeneous robots to produce simultaneously segregative and flocking behaviors using only local sensing. These behaviors have been widely studied in swarm robotics and their combination allows the execution of several complex tasks, ranging from surveillance and reconnaissance, to search and rescue, to transport, and to foraging. Although there are several works in the literature proposing different strategies to achieve these behaviors, to the best of our knowledge, this paper is the first to propose an algorithm that emerges simultaneously behaviors and do not rely on global information or communication. Our approach consists of modeling the swarm as a Gibbs Random Field (GRF) and using appropriate potential functions to reach segregation, cohesion and consensus on the velocity of the swarm. Simulations and proof-of-concept experiments using real robots are presented to evaluate the performance of our methodology in comparison to some of the state-of-the-art works that tackle segregative behaviors.
|
2207.09847
|
Sunayana Rane
|
Sunayana Rane, Mira L. Nencheva, Zeyu Wang, Casey Lew-Williams, Olga
Russakovsky, Thomas L. Griffiths
|
Predicting Word Learning in Children from the Performance of Computer
Vision Systems
|
CogSci 2023
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
For human children as well as machine learning systems, a key challenge in
learning a word is linking the word to the visual phenomena it describes. We
explore this aspect of word learning by using the performance of computer
vision systems as a proxy for the difficulty of learning a word from visual
cues. We show that the age at which children acquire different categories of
words is correlated with the performance of visual classification and
captioning systems, over and above the expected effects of word frequency. The
performance of the computer vision systems is correlated with human judgments
of the concreteness of words, which are in turn a predictor of children's word
learning, suggesting that these models are capturing the relationship between
words and visual phenomena.
|
[
{
"created": "Thu, 7 Jul 2022 22:49:32 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Oct 2022 13:40:11 GMT",
"version": "v2"
},
{
"created": "Sat, 9 Sep 2023 08:33:37 GMT",
"version": "v3"
}
] |
2023-09-12
|
[
[
"Rane",
"Sunayana",
""
],
[
"Nencheva",
"Mira L.",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Lew-Williams",
"Casey",
""
],
[
"Russakovsky",
"Olga",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] |
For human children as well as machine learning systems, a key challenge in learning a word is linking the word to the visual phenomena it describes. We explore this aspect of word learning by using the performance of computer vision systems as a proxy for the difficulty of learning a word from visual cues. We show that the age at which children acquire different categories of words is correlated with the performance of visual classification and captioning systems, over and above the expected effects of word frequency. The performance of the computer vision systems is correlated with human judgments of the concreteness of words, which are in turn a predictor of children's word learning, suggesting that these models are capturing the relationship between words and visual phenomena.
|
2305.04615
|
Junkai Zhang Dr.
|
Junkai Zhang, Tharmalingam Ratnarajah
|
Performance Analysis of In-Band-Full-Duplex Multi-Cell Wideband IAB
Networks
| null |
in IEEE Access, vol. 12, pp. 47024-47040, April 2024
|
10.1109/ACCESS.2024.3382719
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper analyzes the performance of the 3rd Generation Partnership Project
(3GPP)-inspired multi-cell wideband single-hop backhaul
millimeter-wave-in-band-full-duplex (IBFD)-integrated access and backhaul (IAB)
networks by using stochastic geometry. We model the wired-connected Next
Generation NodeBs (gNBs) as the Mat\'ern hard-core point process (MHCPP) to
meet the real-world deployment requirement and reduce the cost caused by wired
connection in the network. We first derive association probabilities that
reflect how likely the typical user-equipment is served by a gNB or an IAB-node
based on the maximum long-term averaged biased-received-desired-signal power
criteria. Further, by leveraging the composite Gamma-Lognormal distribution, we
derive the closed-form signal to interference plus noise ratio coverage,
capacity with outage, and ergodic capacity of the network. In order to avoid
underestimating the noise, we consider the sidelobe gain on inter-cell
interference links and the analog to digital converter quantization noise.
Compared with the half-duplex transmission, numerical results show an enhanced
capacity with outage and ergodic capacity provided by IBFD under successful
self-interference cancellation. We also study how the power bias and density
ratio of the IAB-node to gNB, and the hard-core distance can affect system
performances.
|
[
{
"created": "Mon, 8 May 2023 10:47:32 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Mar 2024 08:20:56 GMT",
"version": "v2"
}
] |
2024-04-08
|
[
[
"Zhang",
"Junkai",
""
],
[
"Ratnarajah",
"Tharmalingam",
""
]
] |
This paper analyzes the performance of the 3rd Generation Partnership Project (3GPP)-inspired multi-cell wideband single-hop backhaul millimeter-wave-in-band-full-duplex (IBFD)-integrated access and backhaul (IAB) networks by using stochastic geometry. We model the wired-connected Next Generation NodeBs (gNBs) as the Mat\'ern hard-core point process (MHCPP) to meet the real-world deployment requirement and reduce the cost caused by wired connection in the network. We first derive association probabilities that reflect how likely the typical user-equipment is served by a gNB or an IAB-node based on the maximum long-term averaged biased-received-desired-signal power criteria. Further, by leveraging the composite Gamma-Lognormal distribution, we derive the closed-form signal to interference plus noise ratio coverage, capacity with outage, and ergodic capacity of the network. In order to avoid underestimating the noise, we consider the sidelobe gain on inter-cell interference links and the analog to digital converter quantization noise. Compared with the half-duplex transmission, numerical results show an enhanced capacity with outage and ergodic capacity provided by IBFD under successful self-interference cancellation. We also study how the power bias and density ratio of the IAB-node to gNB, and the hard-core distance can affect system performances.
|
2405.19009
|
Jihao Liu
|
Jihao Liu, Jinliang Zheng, Boxiao Liu, Yu Liu, Hongsheng Li
|
Enhancing Vision-Language Model with Unmasked Token Alignment
|
Accepted by TMLR; Code and models are available at
https://github.com/jihaonew/UTA
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Contrastive pre-training on image-text pairs, exemplified by CLIP, becomes a
standard technique for learning multi-modal visual-language representations.
Although CLIP has demonstrated remarkable performance, training it from scratch
on noisy web-scale datasets is computationally demanding. On the other hand,
mask-then-predict pre-training approaches, like Masked Image Modeling (MIM),
offer efficient self-supervised learning for single-modal representations. This
paper introduces Unmasked Token Alignment (UTA), a method that leverages
existing CLIP models to further enhance its vision-language representations.
UTA trains a Vision Transformer (ViT) by aligning unmasked visual tokens to the
corresponding image tokens from a frozen CLIP vision encoder, which
automatically aligns the ViT model with the CLIP text encoder. The pre-trained
ViT can be directly applied for zero-shot evaluation even without training on
image-text pairs. Compared to MIM approaches, UTA does not suffer from
training-finetuning inconsistency and is much more training-efficient by
avoiding using the extra [MASK] tokens. Extensive experimental results
demonstrate that UTA can enhance CLIP models and outperform existing MIM
methods on various uni- and multi-modal benchmarks. Code and models are
available at https://github.com/jihaonew/UTA.
|
[
{
"created": "Wed, 29 May 2024 11:48:17 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 14:29:41 GMT",
"version": "v2"
}
] |
2024-06-17
|
[
[
"Liu",
"Jihao",
""
],
[
"Zheng",
"Jinliang",
""
],
[
"Liu",
"Boxiao",
""
],
[
"Liu",
"Yu",
""
],
[
"Li",
"Hongsheng",
""
]
] |
Contrastive pre-training on image-text pairs, exemplified by CLIP, becomes a standard technique for learning multi-modal visual-language representations. Although CLIP has demonstrated remarkable performance, training it from scratch on noisy web-scale datasets is computationally demanding. On the other hand, mask-then-predict pre-training approaches, like Masked Image Modeling (MIM), offer efficient self-supervised learning for single-modal representations. This paper introduces Unmasked Token Alignment (UTA), a method that leverages existing CLIP models to further enhance its vision-language representations. UTA trains a Vision Transformer (ViT) by aligning unmasked visual tokens to the corresponding image tokens from a frozen CLIP vision encoder, which automatically aligns the ViT model with the CLIP text encoder. The pre-trained ViT can be directly applied for zero-shot evaluation even without training on image-text pairs. Compared to MIM approaches, UTA does not suffer from training-finetuning inconsistency and is much more training-efficient by avoiding using the extra [MASK] tokens. Extensive experimental results demonstrate that UTA can enhance CLIP models and outperform existing MIM methods on various uni- and multi-modal benchmarks. Code and models are available at https://github.com/jihaonew/UTA.
|
1602.08128
|
Zhenhao Ge
|
Zhenhao Ge, Sudhendu R. Sharma, Mark J. T. Smith
|
PCA Method for Automated Detection of Mispronounced Words
|
SPIE Defense, Security, and Sensing
| null |
10.1117/12.884155
| null |
cs.SD cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a method for detecting mispronunciations with the aim of
improving Computer Assisted Language Learning (CALL) tools used by foreign
language learners. The algorithm is based on Principle Component Analysis
(PCA). It is hierarchical with each successive step refining the estimate to
classify the test word as being either mispronounced or correct. Preprocessing
before detection, like normalization and time-scale modification, is
implemented to guarantee uniformity of the feature vectors input to the
detection system. The performance using various features including spectrograms
and Mel-Frequency Cepstral Coefficients (MFCCs) are compared and evaluated.
Best results were obtained using MFCCs, achieving up to 99% accuracy in word
verification and 93% in native/non-native classification. Compared with Hidden
Markov Models (HMMs) which are used pervasively in recognition application,
this particular approach is computational efficient and effective when training
data is limited.
|
[
{
"created": "Thu, 25 Feb 2016 21:48:56 GMT",
"version": "v1"
}
] |
2016-02-29
|
[
[
"Ge",
"Zhenhao",
""
],
[
"Sharma",
"Sudhendu R.",
""
],
[
"Smith",
"Mark J. T.",
""
]
] |
This paper presents a method for detecting mispronunciations with the aim of improving Computer Assisted Language Learning (CALL) tools used by foreign language learners. The algorithm is based on Principle Component Analysis (PCA). It is hierarchical with each successive step refining the estimate to classify the test word as being either mispronounced or correct. Preprocessing before detection, like normalization and time-scale modification, is implemented to guarantee uniformity of the feature vectors input to the detection system. The performance using various features including spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs) are compared and evaluated. Best results were obtained using MFCCs, achieving up to 99% accuracy in word verification and 93% in native/non-native classification. Compared with Hidden Markov Models (HMMs) which are used pervasively in recognition application, this particular approach is computational efficient and effective when training data is limited.
|
2307.10822
|
Wei Cong
|
Wei Cong, Yang Cong, Jiahua Dong, Gan Sun, Henghui Ding
|
Gradient-Semantic Compensation for Incremental Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Incremental semantic segmentation aims to continually learn the segmentation
of new coming classes without accessing the training data of previously learned
classes. However, most current methods fail to address catastrophic forgetting
and background shift since they 1) treat all previous classes equally without
considering different forgetting paces caused by imbalanced gradient
back-propagation; 2) lack strong semantic guidance between classes. To tackle
the above challenges, in this paper, we propose a Gradient-Semantic
Compensation (GSC) model, which surmounts incremental semantic segmentation
from both gradient and semantic perspectives. Specifically, to address
catastrophic forgetting from the gradient aspect, we develop a step-aware
gradient compensation that can balance forgetting paces of previously seen
classes via re-weighting gradient backpropagation. Meanwhile, we propose a
soft-sharp semantic relation distillation to distill consistent inter-class
semantic relations via soft labels for alleviating catastrophic forgetting from
the semantic aspect. In addition, we develop a prototypical pseudo re-labeling
that provides strong semantic guidance to mitigate background shift. It
produces high-quality pseudo labels for old classes in the background by
measuring distances between pixels and class-wise prototypes. Extensive
experiments on three public datasets, i.e., Pascal VOC 2012, ADE20K, and
Cityscapes, demonstrate the effectiveness of our proposed GSC model.
|
[
{
"created": "Thu, 20 Jul 2023 12:32:25 GMT",
"version": "v1"
}
] |
2023-07-21
|
[
[
"Cong",
"Wei",
""
],
[
"Cong",
"Yang",
""
],
[
"Dong",
"Jiahua",
""
],
[
"Sun",
"Gan",
""
],
[
"Ding",
"Henghui",
""
]
] |
Incremental semantic segmentation aims to continually learn the segmentation of new coming classes without accessing the training data of previously learned classes. However, most current methods fail to address catastrophic forgetting and background shift since they 1) treat all previous classes equally without considering different forgetting paces caused by imbalanced gradient back-propagation; 2) lack strong semantic guidance between classes. To tackle the above challenges, in this paper, we propose a Gradient-Semantic Compensation (GSC) model, which surmounts incremental semantic segmentation from both gradient and semantic perspectives. Specifically, to address catastrophic forgetting from the gradient aspect, we develop a step-aware gradient compensation that can balance forgetting paces of previously seen classes via re-weighting gradient backpropagation. Meanwhile, we propose a soft-sharp semantic relation distillation to distill consistent inter-class semantic relations via soft labels for alleviating catastrophic forgetting from the semantic aspect. In addition, we develop a prototypical pseudo re-labeling that provides strong semantic guidance to mitigate background shift. It produces high-quality pseudo labels for old classes in the background by measuring distances between pixels and class-wise prototypes. Extensive experiments on three public datasets, i.e., Pascal VOC 2012, ADE20K, and Cityscapes, demonstrate the effectiveness of our proposed GSC model.
|
2403.17597
|
Luke Joel Dr
|
Luke Oluwaseye Joel and Sawyerr A. Babatunde and Adewumi O. Aderemi
|
An Exact Solution for Allocating Car Parking Spaces on Campus
|
An International Multidiscinary Conference on Research, Development
and Practices in Science, Technology, Education, Arts, Management & the
Social Science (iSTEAMS). Conference Centre, University of Ibandan, Nigeria.
30 May - 01 June 2013
| null | null | null |
cs.CE math.OC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
All over the world, especially in the university environment, planning
managers and traffic engineers are constantly faced with the problem of
inadequate allocation of car parking spaces to demanded users. Users could
either prefer reserved parking spaces to unreserved parking spaces or vice
versa. This makes the campus parking manager to be faced with two basic problem
which are: the problem of allocating the actual number of available reserved
spaces to users without any conflict over the same parking space, and the
problem of determining the number of parking permit to be issued for parking
lot with unreserved spaces. Hence, an optimal or available solution to the
problem is required. This paper investigates a model for allocating car parking
spaces, adds a constraint to address the reserved parking policy in a
university environment and solves the parking allocation problem using an exact
solution method. The result obtained gives the value of the objective function
and the optimal allocation of users to each parking lot.
|
[
{
"created": "Tue, 26 Mar 2024 11:08:48 GMT",
"version": "v1"
}
] |
2024-03-27
|
[
[
"Joel",
"Luke Oluwaseye",
""
],
[
"Babatunde",
"Sawyerr A.",
""
],
[
"Aderemi",
"Adewumi O.",
""
]
] |
All over the world, especially in the university environment, planning managers and traffic engineers are constantly faced with the problem of inadequate allocation of car parking spaces to demanded users. Users could either prefer reserved parking spaces to unreserved parking spaces or vice versa. This makes the campus parking manager to be faced with two basic problem which are: the problem of allocating the actual number of available reserved spaces to users without any conflict over the same parking space, and the problem of determining the number of parking permit to be issued for parking lot with unreserved spaces. Hence, an optimal or available solution to the problem is required. This paper investigates a model for allocating car parking spaces, adds a constraint to address the reserved parking policy in a university environment and solves the parking allocation problem using an exact solution method. The result obtained gives the value of the objective function and the optimal allocation of users to each parking lot.
|
1111.0706
|
Gwendal Simon
|
Herve Kerivin, Jimmy Leblet, Gwendal Simon and Fen Zhou
|
Maximum Bounded Rooted-Tree Packing Problem
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a graph and a root, the Maximum Bounded Rooted-Tree Packing (MBRTP)
problem aims at finding K rooted-trees that span the largest subset of
vertices, when each vertex has a limited outdegree. This problem is motivated
by peer-to-peer streaming overlays in under-provisioned systems. We prove that
the MBRTP problem is NP-complete. We present two polynomial-time algorithms
that computes an optimal solution on complete graphs and trees respectively.
|
[
{
"created": "Thu, 3 Nov 2011 01:16:09 GMT",
"version": "v1"
}
] |
2011-11-04
|
[
[
"Kerivin",
"Herve",
""
],
[
"Leblet",
"Jimmy",
""
],
[
"Simon",
"Gwendal",
""
],
[
"Zhou",
"Fen",
""
]
] |
Given a graph and a root, the Maximum Bounded Rooted-Tree Packing (MBRTP) problem aims at finding K rooted-trees that span the largest subset of vertices, when each vertex has a limited outdegree. This problem is motivated by peer-to-peer streaming overlays in under-provisioned systems. We prove that the MBRTP problem is NP-complete. We present two polynomial-time algorithms that computes an optimal solution on complete graphs and trees respectively.
|
2301.09175
|
Tuan Manh Lai
|
Tuan Manh Lai, Heng Ji
|
Ensemble Transfer Learning for Multilingual Coreference Resolution
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entity coreference resolution is an important research problem with many
applications, including information extraction and question answering.
Coreference resolution for English has been studied extensively. However, there
is relatively little work for other languages. A problem that frequently occurs
when working with a non-English language is the scarcity of annotated training
data. To overcome this challenge, we design a simple but effective
ensemble-based framework that combines various transfer learning (TL)
techniques. We first train several models using different TL methods. Then,
during inference, we compute the unweighted average scores of the models'
predictions to extract the final set of predicted clusters. Furthermore, we
also propose a low-cost TL method that bootstraps coreference resolution models
by utilizing Wikipedia anchor texts. Leveraging the idea that the coreferential
links naturally exist between anchor texts pointing to the same article, our
method builds a sizeable distantly-supervised dataset for the target language
that consists of tens of thousands of documents. We can pre-train a model on
the pseudo-labeled dataset before finetuning it on the final target dataset.
Experimental results on two benchmark datasets, OntoNotes and SemEval, confirm
the effectiveness of our methods. Our best ensembles consistently outperform
the baseline approach of simple training by up to 7.68% in the F1 score. These
ensembles also achieve new state-of-the-art results for three languages:
Arabic, Dutch, and Spanish.
|
[
{
"created": "Sun, 22 Jan 2023 18:22:55 GMT",
"version": "v1"
}
] |
2023-01-24
|
[
[
"Lai",
"Tuan Manh",
""
],
[
"Ji",
"Heng",
""
]
] |
Entity coreference resolution is an important research problem with many applications, including information extraction and question answering. Coreference resolution for English has been studied extensively. However, there is relatively little work for other languages. A problem that frequently occurs when working with a non-English language is the scarcity of annotated training data. To overcome this challenge, we design a simple but effective ensemble-based framework that combines various transfer learning (TL) techniques. We first train several models using different TL methods. Then, during inference, we compute the unweighted average scores of the models' predictions to extract the final set of predicted clusters. Furthermore, we also propose a low-cost TL method that bootstraps coreference resolution models by utilizing Wikipedia anchor texts. Leveraging the idea that the coreferential links naturally exist between anchor texts pointing to the same article, our method builds a sizeable distantly-supervised dataset for the target language that consists of tens of thousands of documents. We can pre-train a model on the pseudo-labeled dataset before finetuning it on the final target dataset. Experimental results on two benchmark datasets, OntoNotes and SemEval, confirm the effectiveness of our methods. Our best ensembles consistently outperform the baseline approach of simple training by up to 7.68% in the F1 score. These ensembles also achieve new state-of-the-art results for three languages: Arabic, Dutch, and Spanish.
|
2104.01791
|
Saikat Dutta
|
Sourya Dipta Das, Ayan Basak, Saikat Dutta
|
A Heuristic-driven Uncertainty based Ensemble Framework for Fake News
Detection in Tweets and News Articles
|
Accepted to Neurocomputing
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The significance of social media has increased manifold in the past few
decades as it helps people from even the most remote corners of the world to
stay connected. With the advent of technology, digital media has become more
relevant and widely used than ever before and along with this, there has been a
resurgence in the circulation of fake news and tweets that demand immediate
attention. In this paper, we describe a novel Fake News Detection system that
automatically identifies whether a news item is "real" or "fake", as an
extension of our work in the CONSTRAINT COVID-19 Fake News Detection in English
challenge. We have used an ensemble model consisting of pre-trained models
followed by a statistical feature fusion network , along with a novel heuristic
algorithm by incorporating various attributes present in news items or tweets
like source, username handles, URL domains and authors as statistical feature.
Our proposed framework have also quantified reliable predictive uncertainty
along with proper class output confidence level for the classification task. We
have evaluated our results on the COVID-19 Fake News dataset and FakeNewsNet
dataset to show the effectiveness of the proposed algorithm on detecting fake
news in short news content as well as in news articles. We obtained a best
F1-score of 0.9892 on the COVID-19 dataset, and an F1-score of 0.9073 on the
FakeNewsNet dataset.
|
[
{
"created": "Mon, 5 Apr 2021 06:35:30 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Dec 2021 18:10:04 GMT",
"version": "v2"
}
] |
2021-12-14
|
[
[
"Das",
"Sourya Dipta",
""
],
[
"Basak",
"Ayan",
""
],
[
"Dutta",
"Saikat",
""
]
] |
The significance of social media has increased manifold in the past few decades as it helps people from even the most remote corners of the world to stay connected. With the advent of technology, digital media has become more relevant and widely used than ever before and along with this, there has been a resurgence in the circulation of fake news and tweets that demand immediate attention. In this paper, we describe a novel Fake News Detection system that automatically identifies whether a news item is "real" or "fake", as an extension of our work in the CONSTRAINT COVID-19 Fake News Detection in English challenge. We have used an ensemble model consisting of pre-trained models followed by a statistical feature fusion network , along with a novel heuristic algorithm by incorporating various attributes present in news items or tweets like source, username handles, URL domains and authors as statistical feature. Our proposed framework have also quantified reliable predictive uncertainty along with proper class output confidence level for the classification task. We have evaluated our results on the COVID-19 Fake News dataset and FakeNewsNet dataset to show the effectiveness of the proposed algorithm on detecting fake news in short news content as well as in news articles. We obtained a best F1-score of 0.9892 on the COVID-19 dataset, and an F1-score of 0.9073 on the FakeNewsNet dataset.
|
2403.10220
|
Xinli Hao
|
Xinli Hao, Yile Chen, Chen Yang, Zhihui Du, Chaohong Ma, Chao Wu,
Xiaofeng Meng
|
From Chaos to Clarity: Time Series Anomaly Detection in Astronomical
Observations
|
accepted by ICDE 2024
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of astronomical facilities, large-scale time series data
observed by these facilities is being collected. Analyzing anomalies in these
astronomical observations is crucial for uncovering potential celestial events
and physical phenomena, thus advancing the scientific research process.
However, existing time series anomaly detection methods fall short in tackling
the unique characteristics of astronomical observations where each star is
inherently independent but interfered by random concurrent noise, resulting in
a high rate of false alarms. To overcome the challenges, we propose AERO, a
novel two-stage framework tailored for unsupervised anomaly detection in
astronomical observations. In the first stage, we employ a Transformer-based
encoder-decoder architecture to learn the normal temporal patterns on each
variate (i.e., star) in alignment with the characteristic of variate
independence. In the second stage, we enhance the graph neural network with a
window-wise graph structure learning to tackle the occurrence of concurrent
noise characterized by spatial and temporal randomness. In this way, AERO is
not only capable of distinguishing normal temporal patterns from potential
anomalies but also effectively differentiating concurrent noise, thus
decreasing the number of false alarms. We conducted extensive experiments on
three synthetic datasets and three real-world datasets. The results demonstrate
that AERO outperforms the compared baselines. Notably, compared to the
state-of-the-art model, AERO improves the F1-score by up to 8.76% and 2.63% on
synthetic and real-world datasets respectively.
|
[
{
"created": "Fri, 15 Mar 2024 11:39:12 GMT",
"version": "v1"
}
] |
2024-03-18
|
[
[
"Hao",
"Xinli",
""
],
[
"Chen",
"Yile",
""
],
[
"Yang",
"Chen",
""
],
[
"Du",
"Zhihui",
""
],
[
"Ma",
"Chaohong",
""
],
[
"Wu",
"Chao",
""
],
[
"Meng",
"Xiaofeng",
""
]
] |
With the development of astronomical facilities, large-scale time series data observed by these facilities is being collected. Analyzing anomalies in these astronomical observations is crucial for uncovering potential celestial events and physical phenomena, thus advancing the scientific research process. However, existing time series anomaly detection methods fall short in tackling the unique characteristics of astronomical observations where each star is inherently independent but interfered by random concurrent noise, resulting in a high rate of false alarms. To overcome the challenges, we propose AERO, a novel two-stage framework tailored for unsupervised anomaly detection in astronomical observations. In the first stage, we employ a Transformer-based encoder-decoder architecture to learn the normal temporal patterns on each variate (i.e., star) in alignment with the characteristic of variate independence. In the second stage, we enhance the graph neural network with a window-wise graph structure learning to tackle the occurrence of concurrent noise characterized by spatial and temporal randomness. In this way, AERO is not only capable of distinguishing normal temporal patterns from potential anomalies but also effectively differentiating concurrent noise, thus decreasing the number of false alarms. We conducted extensive experiments on three synthetic datasets and three real-world datasets. The results demonstrate that AERO outperforms the compared baselines. Notably, compared to the state-of-the-art model, AERO improves the F1-score by up to 8.76% and 2.63% on synthetic and real-world datasets respectively.
|
2008.08071
|
Lunjia Hu
|
Lunjia Hu, Omer Reingold
|
Robust Mean Estimation on Highly Incomplete Data with Arbitrary Outliers
|
29 pages, 2 figures. Published in AISTATS 2021. More details in the
proof of Claim 14
| null | null | null |
cs.DS cs.LG math.ST stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of robustly estimating the mean of a $d$-dimensional
distribution given $N$ examples, where most coordinates of every example may be
missing and $\varepsilon N$ examples may be arbitrarily corrupted. Assuming
each coordinate appears in a constant factor more than $\varepsilon N$
examples, we show algorithms that estimate the mean of the distribution with
information-theoretically optimal dimension-independent error guarantees in
nearly-linear time $\widetilde O(Nd)$. Our results extend recent work on
computationally-efficient robust estimation to a more widely applicable
incomplete-data setting.
|
[
{
"created": "Tue, 18 Aug 2020 17:53:34 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Aug 2020 07:50:25 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Mar 2021 01:13:12 GMT",
"version": "v3"
},
{
"created": "Sat, 6 Mar 2021 19:39:54 GMT",
"version": "v4"
},
{
"created": "Mon, 3 May 2021 04:25:56 GMT",
"version": "v5"
}
] |
2021-05-04
|
[
[
"Hu",
"Lunjia",
""
],
[
"Reingold",
"Omer",
""
]
] |
We study the problem of robustly estimating the mean of a $d$-dimensional distribution given $N$ examples, where most coordinates of every example may be missing and $\varepsilon N$ examples may be arbitrarily corrupted. Assuming each coordinate appears in a constant factor more than $\varepsilon N$ examples, we show algorithms that estimate the mean of the distribution with information-theoretically optimal dimension-independent error guarantees in nearly-linear time $\widetilde O(Nd)$. Our results extend recent work on computationally-efficient robust estimation to a more widely applicable incomplete-data setting.
|
1811.11611
|
Joakim Johnander
|
Joakim Johnander, Martin Danelljan, Emil Brissman, Fahad Shahbaz Khan,
Michael Felsberg
|
A Generative Appearance Model for End-to-end Video Object Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the fundamental challenges in video object segmentation is to find an
effective representation of the target and background appearance. The best
performing approaches resort to extensive fine-tuning of a convolutional neural
network for this purpose. Besides being prohibitively expensive, this strategy
cannot be truly trained end-to-end since the online fine-tuning procedure is
not integrated into the offline training of the network.
To address these issues, we propose a network architecture that learns a
powerful representation of the target and background appearance in a single
forward pass. The introduced appearance module learns a probabilistic
generative model of target and background feature distributions. Given a new
image, it predicts the posterior class probabilities, providing a highly
discriminative cue, which is processed in later network modules. Both the
learning and prediction stages of our appearance module are fully
differentiable, enabling true end-to-end training of the entire segmentation
pipeline. Comprehensive experiments demonstrate the effectiveness of the
proposed approach on three video object segmentation benchmarks. We close the
gap to approaches based on online fine-tuning on DAVIS17, while operating at 15
FPS on a single GPU. Furthermore, our method outperforms all published
approaches on the large-scale YouTube-VOS dataset.
|
[
{
"created": "Wed, 28 Nov 2018 15:11:43 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Dec 2018 13:50:46 GMT",
"version": "v2"
}
] |
2018-12-10
|
[
[
"Johnander",
"Joakim",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Brissman",
"Emil",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Felsberg",
"Michael",
""
]
] |
One of the fundamental challenges in video object segmentation is to find an effective representation of the target and background appearance. The best performing approaches resort to extensive fine-tuning of a convolutional neural network for this purpose. Besides being prohibitively expensive, this strategy cannot be truly trained end-to-end since the online fine-tuning procedure is not integrated into the offline training of the network. To address these issues, we propose a network architecture that learns a powerful representation of the target and background appearance in a single forward pass. The introduced appearance module learns a probabilistic generative model of target and background feature distributions. Given a new image, it predicts the posterior class probabilities, providing a highly discriminative cue, which is processed in later network modules. Both the learning and prediction stages of our appearance module are fully differentiable, enabling true end-to-end training of the entire segmentation pipeline. Comprehensive experiments demonstrate the effectiveness of the proposed approach on three video object segmentation benchmarks. We close the gap to approaches based on online fine-tuning on DAVIS17, while operating at 15 FPS on a single GPU. Furthermore, our method outperforms all published approaches on the large-scale YouTube-VOS dataset.
|
2307.01548
|
Hussam Ghanem
|
Hussam Ghanem (ICB), Massinissa Atmani (ICB), Christophe Cruz (ICB)
|
Knowledge Graph for NLG in the context of conversational agents
| null |
French Regional Conference on Complex Systems (FRCCS 2023), May
2023, Le Havre, France
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of knowledge graphs (KGs) enhances the accuracy and comprehensiveness
of the responses provided by a conversational agent. While generating answers
during conversations consists in generating text from these KGs, it is still
regarded as a challenging task that has gained significant attention in recent
years. In this document, we provide a review of different architectures used
for knowledge graph-to-text generation including: Graph Neural Networks, the
Graph Transformer, and linearization with seq2seq models. We discuss the
advantages and limitations of each architecture and conclude that the choice of
architecture will depend on the specific requirements of the task at hand. We
also highlight the importance of considering constraints such as execution time
and model validity, particularly in the context of conversational agents. Based
on these constraints and the availability of labeled data for the domains of
DAVI, we choose to use seq2seq Transformer-based models (PLMs) for the
Knowledge Graph-to-Text Generation task. We aim to refine benchmark datasets of
kg-to-text generation on PLMs and to explore the emotional and multilingual
dimensions in our future work. Overall, this review provides insights into the
different approaches for knowledge graph-to-text generation and outlines future
directions for research in this area.
|
[
{
"created": "Tue, 4 Jul 2023 08:03:33 GMT",
"version": "v1"
}
] |
2023-07-06
|
[
[
"Ghanem",
"Hussam",
"",
"ICB"
],
[
"Atmani",
"Massinissa",
"",
"ICB"
],
[
"Cruz",
"Christophe",
"",
"ICB"
]
] |
The use of knowledge graphs (KGs) enhances the accuracy and comprehensiveness of the responses provided by a conversational agent. While generating answers during conversations consists in generating text from these KGs, it is still regarded as a challenging task that has gained significant attention in recent years. In this document, we provide a review of different architectures used for knowledge graph-to-text generation including: Graph Neural Networks, the Graph Transformer, and linearization with seq2seq models. We discuss the advantages and limitations of each architecture and conclude that the choice of architecture will depend on the specific requirements of the task at hand. We also highlight the importance of considering constraints such as execution time and model validity, particularly in the context of conversational agents. Based on these constraints and the availability of labeled data for the domains of DAVI, we choose to use seq2seq Transformer-based models (PLMs) for the Knowledge Graph-to-Text Generation task. We aim to refine benchmark datasets of kg-to-text generation on PLMs and to explore the emotional and multilingual dimensions in our future work. Overall, this review provides insights into the different approaches for knowledge graph-to-text generation and outlines future directions for research in this area.
|
2407.13479
|
Matthijs Ebbens
|
Matthijs Ebbens, Francis Lazarus
|
Computing the second and third systoles of a combinatorial surface
|
29 pages, 6 figures
| null | null | null |
cs.CG math.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Given a weighted, undirected graph $G$ cellularly embedded on a topological
surface $S$, we describe algorithms to compute the second shortest and third
shortest closed walks of $G$ that are homotopically non-trivial in $S$. Our
algorithms run in $O(n^2\log n)$ time for the second shortest walk and in
$O(n^3)$ time for the third shortest walk. We also show how to reduce the
running time for the second shortest homotopically non-trivial closed walk to
$O(n\log n)$ when both the genus and the number of boundaries are fixed.
Our algorithms rely on a careful analysis of the configurations of the first
three shortest homotopically non-trivial curves in $S$. As an intermediate
step, we also describe how to compute a shortest essential arc between
\emph{one} pair of vertices or between \emph{all} pairs of vertices of a given
boundary component of $S$ in $O(n^2)$ time or $O(n^3)$ time, respectively.
|
[
{
"created": "Thu, 18 Jul 2024 12:57:19 GMT",
"version": "v1"
}
] |
2024-07-19
|
[
[
"Ebbens",
"Matthijs",
""
],
[
"Lazarus",
"Francis",
""
]
] |
Given a weighted, undirected graph $G$ cellularly embedded on a topological surface $S$, we describe algorithms to compute the second shortest and third shortest closed walks of $G$ that are homotopically non-trivial in $S$. Our algorithms run in $O(n^2\log n)$ time for the second shortest walk and in $O(n^3)$ time for the third shortest walk. We also show how to reduce the running time for the second shortest homotopically non-trivial closed walk to $O(n\log n)$ when both the genus and the number of boundaries are fixed. Our algorithms rely on a careful analysis of the configurations of the first three shortest homotopically non-trivial curves in $S$. As an intermediate step, we also describe how to compute a shortest essential arc between \emph{one} pair of vertices or between \emph{all} pairs of vertices of a given boundary component of $S$ in $O(n^2)$ time or $O(n^3)$ time, respectively.
|
2012.14359
|
Mehul Bhatt
|
Jakob Suchan and Mehul Bhatt and Srikrishna Varadarajan
|
Commonsense Visual Sensemaking for Autonomous Driving: On Generalised
Neurosymbolic Online Abduction Integrating Vision and Semantics
|
This is a preprint / review version of an accepted contribution to be
published as part of the Artificial Intelligence Journal (AIJ).? The article
is an extended version of an IJCAI 2019 publication [74, arXiv:1906.00107]
| null | null | null |
cs.AI cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate the need and potential of systematically integrated vision and
semantics solutions for visual sensemaking in the backdrop of autonomous
driving. A general neurosymbolic method for online visual sensemaking using
answer set programming (ASP) is systematically formalised and fully
implemented. The method integrates state of the art in visual computing, and is
developed as a modular framework that is generally usable within hybrid
architectures for realtime perception and control. We evaluate and demonstrate
with community established benchmarks KITTIMOD, MOT-2017, and MOT-2020. As
use-case, we focus on the significance of human-centred visual sensemaking --
e.g., involving semantic representation and explainability, question-answering,
commonsense interpolation -- in safety-critical autonomous driving situations.
The developed neurosymbolic framework is domain-independent, with the case of
autonomous driving designed to serve as an exemplar for online visual
sensemaking in diverse cognitive interaction settings in the backdrop of select
human-centred AI technology design considerations.
Keywords: Cognitive Vision, Deep Semantics, Declarative Spatial Reasoning,
Knowledge Representation and Reasoning, Commonsense Reasoning, Visual
Abduction, Answer Set Programming, Autonomous Driving, Human-Centred Computing
and Design, Standardisation in Driving Technology, Spatial Cognition and AI.
|
[
{
"created": "Mon, 28 Dec 2020 16:55:19 GMT",
"version": "v1"
}
] |
2020-12-29
|
[
[
"Suchan",
"Jakob",
""
],
[
"Bhatt",
"Mehul",
""
],
[
"Varadarajan",
"Srikrishna",
""
]
] |
We demonstrate the need and potential of systematically integrated vision and semantics solutions for visual sensemaking in the backdrop of autonomous driving. A general neurosymbolic method for online visual sensemaking using answer set programming (ASP) is systematically formalised and fully implemented. The method integrates state of the art in visual computing, and is developed as a modular framework that is generally usable within hybrid architectures for realtime perception and control. We evaluate and demonstrate with community established benchmarks KITTIMOD, MOT-2017, and MOT-2020. As use-case, we focus on the significance of human-centred visual sensemaking -- e.g., involving semantic representation and explainability, question-answering, commonsense interpolation -- in safety-critical autonomous driving situations. The developed neurosymbolic framework is domain-independent, with the case of autonomous driving designed to serve as an exemplar for online visual sensemaking in diverse cognitive interaction settings in the backdrop of select human-centred AI technology design considerations. Keywords: Cognitive Vision, Deep Semantics, Declarative Spatial Reasoning, Knowledge Representation and Reasoning, Commonsense Reasoning, Visual Abduction, Answer Set Programming, Autonomous Driving, Human-Centred Computing and Design, Standardisation in Driving Technology, Spatial Cognition and AI.
|
2003.04761
|
Gleison Brito
|
Gleison Brito and Marco Tulio Valente
|
REST vs GraphQL: A Controlled Experiment
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
GraphQL is a novel query language for implementing service-based software
architectures. The language is gaining momentum and it is now used by major
software companies, such as Facebook and GitHub. However, we still lack
empirical evidence on the real gains achieved by GraphQL, particularly in terms
of the effort required to implement queries in this language. Therefore, in
this paper we describe a controlled experiment with 22 students (10
undergraduate and 12 graduate), who were asked to implement eight queries for
accessing a web service, using GraphQL and REST. Our results show that GraphQL
requires less effort to implement remote service queries when compared to REST
(9 vs 6 minutes, median times). These gains increase when REST queries include
more complex endpoints, with several parameters. Interestingly, GraphQL
outperforms REST even among more experienced participants (as is the case of
graduate students) and among participants with previous experience in REST, but
no previous experience in GraphQL.
|
[
{
"created": "Tue, 10 Mar 2020 14:17:39 GMT",
"version": "v1"
}
] |
2020-03-11
|
[
[
"Brito",
"Gleison",
""
],
[
"Valente",
"Marco Tulio",
""
]
] |
GraphQL is a novel query language for implementing service-based software architectures. The language is gaining momentum and it is now used by major software companies, such as Facebook and GitHub. However, we still lack empirical evidence on the real gains achieved by GraphQL, particularly in terms of the effort required to implement queries in this language. Therefore, in this paper we describe a controlled experiment with 22 students (10 undergraduate and 12 graduate), who were asked to implement eight queries for accessing a web service, using GraphQL and REST. Our results show that GraphQL requires less effort to implement remote service queries when compared to REST (9 vs 6 minutes, median times). These gains increase when REST queries include more complex endpoints, with several parameters. Interestingly, GraphQL outperforms REST even among more experienced participants (as is the case of graduate students) and among participants with previous experience in REST, but no previous experience in GraphQL.
|
1809.06367
|
Eugene Belilovsky
|
Edouard Oyallon (CVN, GALEN), Sergey Zagoruyko (ENPC, LIGM), Gabriel
Huang (DIRO, MILA), Nikos Komodakis (ENPC, CSD-UOC, LIGM), Simon
Lacoste-Julien (DIRO, MILA), Matthew Blaschko (ESAT), Eugene Belilovsky
(DIRO, MILA)
|
Scattering Networks for Hybrid Representation Learning
|
arXiv admin note: substantial text overlap with arXiv:1703.08961
|
IEEE Transactions on Pattern Analysis and Machine Intelligence,
Institute of Electrical and Electronics Engineers, 2018, pp.11
|
10.1109/TPAMI.2018.2855738
| null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scattering networks are a class of designed Convolutional Neural Networks
(CNNs) with fixed weights. We argue they can serve as generic representations
for modelling images. In particular, by working in scattering space, we achieve
competitive results both for supervised and unsupervised learning tasks, while
making progress towards constructing more interpretable CNNs. For supervised
learning, we demonstrate that the early layers of CNNs do not necessarily need
to be learned, and can be replaced with a scattering network instead. Indeed,
using hybrid architectures, we achieve the best results with predefined
representations to-date, while being competitive with end-to-end learned CNNs.
Specifically, even applying a shallow cascade of small-windowed scattering
coefficients followed by 1$\times$1-convolutions results in AlexNet accuracy on
the ILSVRC2012 classification task. Moreover, by combining scattering networks
with deep residual networks, we achieve a single-crop top-5 error of 11.4% on
ILSVRC2012. Also, we show they can yield excellent performance in the small
sample regime on CIFAR-10 and STL-10 datasets, exceeding their end-to-end
counterparts, through their ability to incorporate geometrical priors. For
unsupervised learning, scattering coefficients can be a competitive
representation that permits image recovery. We use this fact to train hybrid
GANs to generate images. Finally, we empirically analyze several properties
related to stability and reconstruction of images from scattering coefficients.
|
[
{
"created": "Mon, 17 Sep 2018 06:27:40 GMT",
"version": "v1"
}
] |
2018-09-19
|
[
[
"Oyallon",
"Edouard",
"",
"CVN, GALEN"
],
[
"Zagoruyko",
"Sergey",
"",
"ENPC, LIGM"
],
[
"Huang",
"Gabriel",
"",
"DIRO, MILA"
],
[
"Komodakis",
"Nikos",
"",
"ENPC, CSD-UOC, LIGM"
],
[
"Lacoste-Julien",
"Simon",
"",
"DIRO, MILA"
],
[
"Blaschko",
"Matthew",
"",
"ESAT"
],
[
"Belilovsky",
"Eugene",
"",
"DIRO, MILA"
]
] |
Scattering networks are a class of designed Convolutional Neural Networks (CNNs) with fixed weights. We argue they can serve as generic representations for modelling images. In particular, by working in scattering space, we achieve competitive results both for supervised and unsupervised learning tasks, while making progress towards constructing more interpretable CNNs. For supervised learning, we demonstrate that the early layers of CNNs do not necessarily need to be learned, and can be replaced with a scattering network instead. Indeed, using hybrid architectures, we achieve the best results with predefined representations to-date, while being competitive with end-to-end learned CNNs. Specifically, even applying a shallow cascade of small-windowed scattering coefficients followed by 1$\times$1-convolutions results in AlexNet accuracy on the ILSVRC2012 classification task. Moreover, by combining scattering networks with deep residual networks, we achieve a single-crop top-5 error of 11.4% on ILSVRC2012. Also, we show they can yield excellent performance in the small sample regime on CIFAR-10 and STL-10 datasets, exceeding their end-to-end counterparts, through their ability to incorporate geometrical priors. For unsupervised learning, scattering coefficients can be a competitive representation that permits image recovery. We use this fact to train hybrid GANs to generate images. Finally, we empirically analyze several properties related to stability and reconstruction of images from scattering coefficients.
|
2205.08247
|
Joao Monteiro
|
Joao Monteiro, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, Greg Mori
|
Monotonicity Regularization: Improved Penalties and Novel Applications
to Disentangled Representation Learning and Robust Classification
|
Accepted to UAI 2022
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We study settings where gradient penalties are used alongside risk
minimization with the goal of obtaining predictors satisfying different notions
of monotonicity. Specifically, we present two sets of contributions. In the
first part of the paper, we show that different choices of penalties define the
regions of the input space where the property is observed. As such, previous
methods result in models that are monotonic only in a small volume of the input
space. We thus propose an approach that uses mixtures of training instances and
random points to populate the space and enforce the penalty in a much larger
region. As a second set of contributions, we introduce regularization
strategies that enforce other notions of monotonicity in different settings. In
this case, we consider applications, such as image classification and
generative modeling, where monotonicity is not a hard constraint but can help
improve some aspects of the model. Namely, we show that inducing monotonicity
can be beneficial in applications such as: (1) allowing for controllable data
generation, (2) defining strategies to detect anomalous data, and (3)
generating explanations for predictions. Our proposed approaches do not
introduce relevant computational overhead while leading to efficient procedures
that provide extra benefits over baseline models.
|
[
{
"created": "Tue, 17 May 2022 11:42:45 GMT",
"version": "v1"
}
] |
2022-05-18
|
[
[
"Monteiro",
"Joao",
""
],
[
"Ahmed",
"Mohamed Osama",
""
],
[
"Hajimirsadeghi",
"Hossein",
""
],
[
"Mori",
"Greg",
""
]
] |
We study settings where gradient penalties are used alongside risk minimization with the goal of obtaining predictors satisfying different notions of monotonicity. Specifically, we present two sets of contributions. In the first part of the paper, we show that different choices of penalties define the regions of the input space where the property is observed. As such, previous methods result in models that are monotonic only in a small volume of the input space. We thus propose an approach that uses mixtures of training instances and random points to populate the space and enforce the penalty in a much larger region. As a second set of contributions, we introduce regularization strategies that enforce other notions of monotonicity in different settings. In this case, we consider applications, such as image classification and generative modeling, where monotonicity is not a hard constraint but can help improve some aspects of the model. Namely, we show that inducing monotonicity can be beneficial in applications such as: (1) allowing for controllable data generation, (2) defining strategies to detect anomalous data, and (3) generating explanations for predictions. Our proposed approaches do not introduce relevant computational overhead while leading to efficient procedures that provide extra benefits over baseline models.
|
1909.04242
|
Guanhua Zhang
|
Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, Tiejun
Zhao
|
Mitigating Annotation Artifacts in Natural Language Inference Datasets
to Improve Cross-dataset Generalization Ability
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language inference (NLI) aims at predicting the relationship between
a given pair of premise and hypothesis. However, several works have found that
there widely exists a bias pattern called annotation artifacts in NLI datasets,
making it possible to identify the label only by looking at the hypothesis.
This irregularity makes the evaluation results over-estimated and affects
models' generalization ability. In this paper, we consider a more trust-worthy
setting, i.e., cross-dataset evaluation. We explore the impacts of annotation
artifacts in cross-dataset testing. Furthermore, we propose a training
framework to mitigate the impacts of the bias pattern. Experimental results
demonstrate that our methods can alleviate the negative effect of the artifacts
and improve the generalization ability of models.
|
[
{
"created": "Tue, 10 Sep 2019 02:35:34 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Oct 2019 14:40:31 GMT",
"version": "v2"
}
] |
2019-10-08
|
[
[
"Zhang",
"Guanhua",
""
],
[
"Bai",
"Bing",
""
],
[
"Zhang",
"Junqi",
""
],
[
"Bai",
"Kun",
""
],
[
"Zhu",
"Conghui",
""
],
[
"Zhao",
"Tiejun",
""
]
] |
Natural language inference (NLI) aims at predicting the relationship between a given pair of premise and hypothesis. However, several works have found that there widely exists a bias pattern called annotation artifacts in NLI datasets, making it possible to identify the label only by looking at the hypothesis. This irregularity makes the evaluation results over-estimated and affects models' generalization ability. In this paper, we consider a more trust-worthy setting, i.e., cross-dataset evaluation. We explore the impacts of annotation artifacts in cross-dataset testing. Furthermore, we propose a training framework to mitigate the impacts of the bias pattern. Experimental results demonstrate that our methods can alleviate the negative effect of the artifacts and improve the generalization ability of models.
|
1807.10543
|
Neslihan Suzen
|
Neslihan Suzen, Alexander Gorban, Jeremy Levesley and Evgeny Mirkes
|
Automatic Short Answer Grading and Feedback Using Text Mining Methods
|
27 pages; added questions for section 6; correction of typos
|
Procedia Computer Science 169 (2020), 726-743
|
10.1016/j.procs.2020.02.171
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic grading is not a new approach but the need to adapt the latest
technology to automatic grading has become very important. As the technology
has rapidly became more powerful on scoring exams and essays, especially from
the 1990s onwards, partially or wholly automated grading systems using
computational methods have evolved and have become a major area of research. In
particular, the demand of scoring of natural language responses has created a
need for tools that can be applied to automatically grade these responses. In
this paper, we focus on the concept of automatic grading of short answer
questions such as are typical in the UK GCSE system, and providing useful
feedback on their answers to students. We present experimental results on a
dataset provided from the introductory computer science class in the University
of North Texas. We first apply standard data mining techniques to the corpus of
student answers for the purpose of measuring similarity between the student
answers and the model answer. This is based on the number of common words. We
then evaluate the relation between these similarities and marks awarded by
scorers. We then consider an approach that groups student answers into
clusters. Each cluster would be awarded the same mark, and the same feedback
given to each answer in a cluster. In this manner, we demonstrate that clusters
indicate the groups of students who are awarded the same or the similar scores.
Words in each cluster are compared to show that clusters are constructed based
on how many and which words of the model answer have been used. The main
novelty in this paper is that we design a model to predict marks based on the
similarities between the student answers and the model answer.
|
[
{
"created": "Fri, 27 Jul 2018 12:00:21 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jun 2019 17:21:46 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Dec 2019 20:48:35 GMT",
"version": "v3"
}
] |
2020-04-20
|
[
[
"Suzen",
"Neslihan",
""
],
[
"Gorban",
"Alexander",
""
],
[
"Levesley",
"Jeremy",
""
],
[
"Mirkes",
"Evgeny",
""
]
] |
Automatic grading is not a new approach but the need to adapt the latest technology to automatic grading has become very important. As the technology has rapidly became more powerful on scoring exams and essays, especially from the 1990s onwards, partially or wholly automated grading systems using computational methods have evolved and have become a major area of research. In particular, the demand of scoring of natural language responses has created a need for tools that can be applied to automatically grade these responses. In this paper, we focus on the concept of automatic grading of short answer questions such as are typical in the UK GCSE system, and providing useful feedback on their answers to students. We present experimental results on a dataset provided from the introductory computer science class in the University of North Texas. We first apply standard data mining techniques to the corpus of student answers for the purpose of measuring similarity between the student answers and the model answer. This is based on the number of common words. We then evaluate the relation between these similarities and marks awarded by scorers. We then consider an approach that groups student answers into clusters. Each cluster would be awarded the same mark, and the same feedback given to each answer in a cluster. In this manner, we demonstrate that clusters indicate the groups of students who are awarded the same or the similar scores. Words in each cluster are compared to show that clusters are constructed based on how many and which words of the model answer have been used. The main novelty in this paper is that we design a model to predict marks based on the similarities between the student answers and the model answer.
|
1301.7345
|
Anirban Ghatak
|
Anirban Ghatak
|
Codes on Lattices for Random SAF Routing
|
17 pages, 1 figure; some typos corrected, a table of numerical data
added
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a construction of constant weight codes based on the unique
decomposition of elements in lattices is presented. The conditions for unique
primary decomposition and unique irreducible decomposition in lattices are
discussed and connections with the decomposition of ideals in Noetherian
commutative rings established. In this context it is shown, drawing on the
definitive works of Dilworth, Ward and others, that, as opposed to Noetherian
commutative rings, the existence of unique irreducible decomposition in
lattices does not guarantee unique primary decomposition. The source alphabet
in our proposed construction is a set of uniquely decomposable elements
constructed from a chosen subset of irreducible or primary elements of the
appropriate lattice. The distance function between two lattice elements is
based on the symmetric distance between sets of constituent elements. It is
known that constructing such constant weight codes is equivalent to
constructing a Johnson graph with appropriate parameters. Some bounds on the
code sizes are also presented and a method to obtain codes of optimal size,
utilizing the Johnson graph description of the codes, is discussed. As an
application we show how these codes can be used for error and erasure
correction in random networks employing store-and-forward (SAF) routing.
|
[
{
"created": "Wed, 30 Jan 2013 20:10:25 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Apr 2013 05:13:54 GMT",
"version": "v2"
}
] |
2013-04-30
|
[
[
"Ghatak",
"Anirban",
""
]
] |
In this paper, a construction of constant weight codes based on the unique decomposition of elements in lattices is presented. The conditions for unique primary decomposition and unique irreducible decomposition in lattices are discussed and connections with the decomposition of ideals in Noetherian commutative rings established. In this context it is shown, drawing on the definitive works of Dilworth, Ward and others, that, as opposed to Noetherian commutative rings, the existence of unique irreducible decomposition in lattices does not guarantee unique primary decomposition. The source alphabet in our proposed construction is a set of uniquely decomposable elements constructed from a chosen subset of irreducible or primary elements of the appropriate lattice. The distance function between two lattice elements is based on the symmetric distance between sets of constituent elements. It is known that constructing such constant weight codes is equivalent to constructing a Johnson graph with appropriate parameters. Some bounds on the code sizes are also presented and a method to obtain codes of optimal size, utilizing the Johnson graph description of the codes, is discussed. As an application we show how these codes can be used for error and erasure correction in random networks employing store-and-forward (SAF) routing.
|
2306.04841
|
Ha Thanh Nguyen
|
Thi-Hai-Yen Vuong, Ha-Thanh Nguyen, Quang-Huy Nguyen, Le-Minh Nguyen,
and Xuan-Hieu Phan
|
Improving Vietnamese Legal Question--Answering System based on Automatic
Data Enrichment
|
JURISIN 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Question answering (QA) in law is a challenging problem because legal
documents are much more complicated than normal texts in terms of terminology,
structure, and temporal and logical relationships. It is even more difficult to
perform legal QA for low-resource languages like Vietnamese where labeled data
are rare and pre-trained language models are still limited. In this paper, we
try to overcome these limitations by implementing a Vietnamese article-level
retrieval-based legal QA system and introduce a novel method to improve the
performance of language models by improving data quality through weak labeling.
Our hypothesis is that in contexts where labeled data are limited, efficient
data enrichment can help increase overall performance. Our experiments are
designed to test multiple aspects, which demonstrate the effectiveness of the
proposed technique.
|
[
{
"created": "Thu, 8 Jun 2023 00:24:29 GMT",
"version": "v1"
}
] |
2023-06-09
|
[
[
"Vuong",
"Thi-Hai-Yen",
""
],
[
"Nguyen",
"Ha-Thanh",
""
],
[
"Nguyen",
"Quang-Huy",
""
],
[
"Nguyen",
"Le-Minh",
""
],
[
"Phan",
"Xuan-Hieu",
""
]
] |
Question answering (QA) in law is a challenging problem because legal documents are much more complicated than normal texts in terms of terminology, structure, and temporal and logical relationships. It is even more difficult to perform legal QA for low-resource languages like Vietnamese where labeled data are rare and pre-trained language models are still limited. In this paper, we try to overcome these limitations by implementing a Vietnamese article-level retrieval-based legal QA system and introduce a novel method to improve the performance of language models by improving data quality through weak labeling. Our hypothesis is that in contexts where labeled data are limited, efficient data enrichment can help increase overall performance. Our experiments are designed to test multiple aspects, which demonstrate the effectiveness of the proposed technique.
|
0712.2630
|
Juan J. Merelo Pr.
|
Nestor Zorzano, Daniel Merino, J.L.J. Laredo, J.P. Sevilla, Pablo
Garcia, J.J. Merelo
|
Evolving XSLT stylesheets
|
First draft, preparing for WCCI 2008
| null | null | null |
cs.NE cs.PL
| null |
This paper introduces a procedure based on genetic programming to evolve XSLT
programs (usually called stylesheets or logicsheets). XSLT is a general
purpose, document-oriented functional language, generally used to transform XML
documents (or, in general, solve any problem that can be coded as an XML
document). The proposed solution uses a tree representation for the stylesheets
as well as diverse specific operators in order to obtain, in the studied cases
and a reasonable time, a XSLT stylesheet that performs the transformation.
Several types of representation have been compared, resulting in different
performance and degree of success.
|
[
{
"created": "Mon, 17 Dec 2007 19:59:42 GMT",
"version": "v1"
}
] |
2007-12-18
|
[
[
"Zorzano",
"Nestor",
""
],
[
"Merino",
"Daniel",
""
],
[
"Laredo",
"J. L. J.",
""
],
[
"Sevilla",
"J. P.",
""
],
[
"Garcia",
"Pablo",
""
],
[
"Merelo",
"J. J.",
""
]
] |
This paper introduces a procedure based on genetic programming to evolve XSLT programs (usually called stylesheets or logicsheets). XSLT is a general purpose, document-oriented functional language, generally used to transform XML documents (or, in general, solve any problem that can be coded as an XML document). The proposed solution uses a tree representation for the stylesheets as well as diverse specific operators in order to obtain, in the studied cases and a reasonable time, a XSLT stylesheet that performs the transformation. Several types of representation have been compared, resulting in different performance and degree of success.
|
1012.4113
|
Nadeem Javaid
|
Khaled Dridi, Nadeem Javaid, Karim Djouani, Boubaker Daachi
|
Performance Study of IEEE802.11e QoS in EDCF-Contention-based Static and
Dynamic Scenarios
|
4 Pages
|
2nd IEEE International Conference on Electronics, Circuits, and
Systems (ICECS)2009, Tunis
|
10.1109/ICECS.2009.5410754
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we carry-out a study of the Quality of Service (QoS) mechanism
in IEEE802.11e Enhanced Distribution Coordination Function (EDCF) and how it is
achieved by providing traffics with different priorities. It can perform the
access to the radio channel or just simply it can considerably be declined
subsequently to a variation of network dynamicity. The results of the proposed
analysis show that the EDCF scheduler looses the ability of the traffic
differentiation and becomes insensitive to the QoS priority requirements.
Consequently, it goes away from the region of stability and EDCF doesn't offer
better performance than the conventional DCF scheme. Therefore, traffic
specifications are weakly applied only for the channel occupation time
distribution. During the handoff between the Base Stations (BS's), the response
time of the data rate application within the roaming process grows to the
initial throughput level. Performance metrics at the MAC layer, like
throughput, End-2-End delay, and packet loss have been evaluated.
|
[
{
"created": "Sat, 18 Dec 2010 19:38:58 GMT",
"version": "v1"
}
] |
2016-11-18
|
[
[
"Dridi",
"Khaled",
""
],
[
"Javaid",
"Nadeem",
""
],
[
"Djouani",
"Karim",
""
],
[
"Daachi",
"Boubaker",
""
]
] |
In this paper, we carry-out a study of the Quality of Service (QoS) mechanism in IEEE802.11e Enhanced Distribution Coordination Function (EDCF) and how it is achieved by providing traffics with different priorities. It can perform the access to the radio channel or just simply it can considerably be declined subsequently to a variation of network dynamicity. The results of the proposed analysis show that the EDCF scheduler looses the ability of the traffic differentiation and becomes insensitive to the QoS priority requirements. Consequently, it goes away from the region of stability and EDCF doesn't offer better performance than the conventional DCF scheme. Therefore, traffic specifications are weakly applied only for the channel occupation time distribution. During the handoff between the Base Stations (BS's), the response time of the data rate application within the roaming process grows to the initial throughput level. Performance metrics at the MAC layer, like throughput, End-2-End delay, and packet loss have been evaluated.
|
1406.2631
|
Mo Ghorbanzadeh
|
Mo Ghorbanzadeh, Ahmed Abdelhadi, Charles Clancy
|
A Utility Proportional Fairness Resource Allocation in Spectrally
Radar-Coexistent Cellular Networks
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spectrum sharing is an elegant solution to addressing the scarcity of the
bandwidth for wireless communications systems. This research studies the
feasibility of sharing the spectrum between sectorized cellular systems and
stationary radars interfering with certain sectors of the communications
infrastructure. It also explores allocating optimal resources to mobile devices
in order to provide with the quality of service for all running applications
whilst growing the communications network spectrally coexistent with the radar
systems. The rate allocation problem is formulated as two convex optimizations,
where the radar-interfering sector assignments are extracted from the portion
of the spectrum non-overlapping with the radar operating frequency. Such a
double-stage resource allocation procedure inherits the fairness into the rate
allocation scheme by first assigning the spectrally radar-overlapping
resources.
|
[
{
"created": "Mon, 9 Jun 2014 19:08:46 GMT",
"version": "v1"
}
] |
2014-06-11
|
[
[
"Ghorbanzadeh",
"Mo",
""
],
[
"Abdelhadi",
"Ahmed",
""
],
[
"Clancy",
"Charles",
""
]
] |
Spectrum sharing is an elegant solution to addressing the scarcity of the bandwidth for wireless communications systems. This research studies the feasibility of sharing the spectrum between sectorized cellular systems and stationary radars interfering with certain sectors of the communications infrastructure. It also explores allocating optimal resources to mobile devices in order to provide with the quality of service for all running applications whilst growing the communications network spectrally coexistent with the radar systems. The rate allocation problem is formulated as two convex optimizations, where the radar-interfering sector assignments are extracted from the portion of the spectrum non-overlapping with the radar operating frequency. Such a double-stage resource allocation procedure inherits the fairness into the rate allocation scheme by first assigning the spectrally radar-overlapping resources.
|
1203.4844
|
Ernest Kurniawan
|
Ernest Kurniawan, Andrea Goldsmith, and Stefano Rini
|
Practical Coding Schemes for Cognitive Overlay Radios
|
Patent pending
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop practical coding schemes for the cognitive overlay radios as
modeled by the cognitive interference channel, a variation of the classical two
user interference channel where one of the transmitters has knowledge of both
messages. Inspired by information theoretical results, we develop a coding
strategy for each of the three parameter regimes where capacity is known. A key
feature of the capacity achieving schemes in these regimes is the joint
decoding of both users' codewords, which we accomplish by performing a
posteriori probability calculation over a combined trellis. The schemes are
shown to perform close to the capacity limit with low error rate.
|
[
{
"created": "Wed, 21 Mar 2012 21:39:24 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jun 2012 23:42:45 GMT",
"version": "v2"
}
] |
2012-06-12
|
[
[
"Kurniawan",
"Ernest",
""
],
[
"Goldsmith",
"Andrea",
""
],
[
"Rini",
"Stefano",
""
]
] |
We develop practical coding schemes for the cognitive overlay radios as modeled by the cognitive interference channel, a variation of the classical two user interference channel where one of the transmitters has knowledge of both messages. Inspired by information theoretical results, we develop a coding strategy for each of the three parameter regimes where capacity is known. A key feature of the capacity achieving schemes in these regimes is the joint decoding of both users' codewords, which we accomplish by performing a posteriori probability calculation over a combined trellis. The schemes are shown to perform close to the capacity limit with low error rate.
|
2004.02220
|
Mohammad Reza Zarrabi
|
Mohammad Reza Zarrabi, Nasrollah Moghaddam Charkari
|
Query-points visibility constraint minimum link paths in simple polygons
| null |
Fundamenta Informaticae, Volume 182, Issue 3 (November 18, 2021)
fi:8386
|
10.3233/FI-2021-2075
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the query version of constrained minimum link paths between two
points inside a simple polygon $P$ with $n$ vertices such that there is at
least one point on the path, visible from a query point. The method is based on
partitioning $P$ into a number of faces of equal link distance from a point,
called a link-based shortest path map (SPM). Initially, we solve this problem
for two given points $s$, $t$ and a query point $q$. Then, the proposed
solution is extended to a general case for three arbitrary query points $s$,
$t$ and $q$. In the former, we propose an algorithm with $O(n)$ preprocessing
time. Extending this approach for the latter case, we develop an algorithm with
$O(n^3)$ preprocessing time. The link distance of a $q$-$visible$ path between
$s$, $t$ as well as the path are provided in time $O(\log n)$ and $O(m+\log
n)$, respectively, for the above two cases, where $m$ is the number of links.
|
[
{
"created": "Sun, 5 Apr 2020 14:47:17 GMT",
"version": "v1"
},
{
"created": "Tue, 5 May 2020 12:51:25 GMT",
"version": "v2"
},
{
"created": "Sun, 5 Jul 2020 09:14:27 GMT",
"version": "v3"
},
{
"created": "Mon, 25 Jan 2021 16:30:45 GMT",
"version": "v4"
},
{
"created": "Sat, 21 Aug 2021 13:52:17 GMT",
"version": "v5"
},
{
"created": "Thu, 28 Oct 2021 16:24:15 GMT",
"version": "v6"
}
] |
2023-06-22
|
[
[
"Zarrabi",
"Mohammad Reza",
""
],
[
"Charkari",
"Nasrollah Moghaddam",
""
]
] |
We study the query version of constrained minimum link paths between two points inside a simple polygon $P$ with $n$ vertices such that there is at least one point on the path, visible from a query point. The method is based on partitioning $P$ into a number of faces of equal link distance from a point, called a link-based shortest path map (SPM). Initially, we solve this problem for two given points $s$, $t$ and a query point $q$. Then, the proposed solution is extended to a general case for three arbitrary query points $s$, $t$ and $q$. In the former, we propose an algorithm with $O(n)$ preprocessing time. Extending this approach for the latter case, we develop an algorithm with $O(n^3)$ preprocessing time. The link distance of a $q$-$visible$ path between $s$, $t$ as well as the path are provided in time $O(\log n)$ and $O(m+\log n)$, respectively, for the above two cases, where $m$ is the number of links.
|
1911.00852
|
Masoud Mansoury
|
Masoud Mansoury, Himan Abdollahpouri, Joris Rombouts, Mykola
Pechenizkiy
|
The Relationship between the Consistency of Users' Ratings and
Recommendation Calibration
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fairness in recommender systems has recently received attention from
researchers. Unfair recommendations have negative impact on the effectiveness
of recommender systems as it may degrade users' satisfaction, loyalty, and at
worst, it can lead to or perpetuate undesirable social dynamics. One of the
factors that may impact fairness is calibration, the degree to which users'
preferences on various item categories are reflected in the recommendations
they receive.
The ability of a recommendation algorithm for generating effective
recommendations may depend on the meaningfulness of the input data and the
amount of information available in users' profile. In this paper, we aim to
explore the relationship between the consistency of users' ratings behavior and
the degree of calibrated recommendations they receive. We conduct our analysis
on different groups of users based on the consistency of their ratings. Our
experimental results on a movie dataset and several recommendation algorithms
show that there is a positive correlation between the consistency of users'
ratings behavior and the degree of calibration in their recommendations,
meaning that user groups with higher inconsistency in their ratings receive
less calibrated recommendations.
|
[
{
"created": "Sun, 3 Nov 2019 08:10:33 GMT",
"version": "v1"
}
] |
2019-11-05
|
[
[
"Mansoury",
"Masoud",
""
],
[
"Abdollahpouri",
"Himan",
""
],
[
"Rombouts",
"Joris",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] |
Fairness in recommender systems has recently received attention from researchers. Unfair recommendations have negative impact on the effectiveness of recommender systems as it may degrade users' satisfaction, loyalty, and at worst, it can lead to or perpetuate undesirable social dynamics. One of the factors that may impact fairness is calibration, the degree to which users' preferences on various item categories are reflected in the recommendations they receive. The ability of a recommendation algorithm for generating effective recommendations may depend on the meaningfulness of the input data and the amount of information available in users' profile. In this paper, we aim to explore the relationship between the consistency of users' ratings behavior and the degree of calibrated recommendations they receive. We conduct our analysis on different groups of users based on the consistency of their ratings. Our experimental results on a movie dataset and several recommendation algorithms show that there is a positive correlation between the consistency of users' ratings behavior and the degree of calibration in their recommendations, meaning that user groups with higher inconsistency in their ratings receive less calibrated recommendations.
|
2112.08726
|
Peter West
|
Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel
Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A.
Smith, Yejin Choi
|
NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead
Heuristics
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The dominant paradigm for neural text generation is left-to-right decoding
from autoregressive language models. Constrained or controllable generation
under complex lexical constraints, however, requires foresight to plan ahead
feasible future paths.
Drawing inspiration from the A* search algorithm, we propose NeuroLogic
A*esque, a decoding algorithm that incorporates heuristic estimates of future
cost. We develop efficient lookahead heuristics that are efficient for
large-scale language models, making our method a drop-in replacement for common
techniques such as beam search and top-k sampling. To enable constrained
generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its
flexibility in incorporating logical constraints with A*esque estimates of
future constraint satisfaction.
Our approach outperforms competitive baselines on five generation tasks, and
achieves new state-of-the-art performance on table-to-text generation,
constrained machine translation, and keyword-constrained generation. The
improvements are particularly notable on tasks that require complex constraint
satisfaction or in few-shot or zero-shot settings. NeuroLogic A*esque
illustrates the power of decoding for improving and enabling new capabilities
of large-scale language models.
|
[
{
"created": "Thu, 16 Dec 2021 09:22:54 GMT",
"version": "v1"
}
] |
2021-12-17
|
[
[
"Lu",
"Ximing",
""
],
[
"Welleck",
"Sean",
""
],
[
"West",
"Peter",
""
],
[
"Jiang",
"Liwei",
""
],
[
"Kasai",
"Jungo",
""
],
[
"Khashabi",
"Daniel",
""
],
[
"Bras",
"Ronan Le",
""
],
[
"Qin",
"Lianhui",
""
],
[
"Yu",
"Youngjae",
""
],
[
"Zellers",
"Rowan",
""
],
[
"Smith",
"Noah A.",
""
],
[
"Choi",
"Yejin",
""
]
] |
The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however, requires foresight to plan ahead feasible future paths. Drawing inspiration from the A* search algorithm, we propose NeuroLogic A*esque, a decoding algorithm that incorporates heuristic estimates of future cost. We develop efficient lookahead heuristics that are efficient for large-scale language models, making our method a drop-in replacement for common techniques such as beam search and top-k sampling. To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in incorporating logical constraints with A*esque estimates of future constraint satisfaction. Our approach outperforms competitive baselines on five generation tasks, and achieves new state-of-the-art performance on table-to-text generation, constrained machine translation, and keyword-constrained generation. The improvements are particularly notable on tasks that require complex constraint satisfaction or in few-shot or zero-shot settings. NeuroLogic A*esque illustrates the power of decoding for improving and enabling new capabilities of large-scale language models.
|
1412.0315
|
Guy Van den Broeck
|
Guy Van den Broeck and Mathias Niepert
|
Lifted Probabilistic Inference for Asymmetric Graphical Models
|
To appear in Proceedings of AAAI-2015
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lifted probabilistic inference algorithms have been successfully applied to a
large number of symmetric graphical models. Unfortunately, the majority of
real-world graphical models is asymmetric. This is even the case for relational
representations when evidence is given. Therefore, more recent work in the
community moved to making the models symmetric and then applying existing
lifted inference algorithms. However, this approach has two shortcomings.
First, all existing over-symmetric approximations require a relational
representation such as Markov logic networks. Second, the induced symmetries
often change the distribution significantly, making the computed probabilities
highly biased. We present a framework for probabilistic sampling-based
inference that only uses the induced approximate symmetries to propose steps in
a Metropolis-Hastings style Markov chain. The framework, therefore, leads to
improved probability estimates while remaining unbiased. Experiments
demonstrate that the approach outperforms existing MCMC algorithms.
|
[
{
"created": "Mon, 1 Dec 2014 00:40:33 GMT",
"version": "v1"
}
] |
2014-12-02
|
[
[
"Broeck",
"Guy Van den",
""
],
[
"Niepert",
"Mathias",
""
]
] |
Lifted probabilistic inference algorithms have been successfully applied to a large number of symmetric graphical models. Unfortunately, the majority of real-world graphical models is asymmetric. This is even the case for relational representations when evidence is given. Therefore, more recent work in the community moved to making the models symmetric and then applying existing lifted inference algorithms. However, this approach has two shortcomings. First, all existing over-symmetric approximations require a relational representation such as Markov logic networks. Second, the induced symmetries often change the distribution significantly, making the computed probabilities highly biased. We present a framework for probabilistic sampling-based inference that only uses the induced approximate symmetries to propose steps in a Metropolis-Hastings style Markov chain. The framework, therefore, leads to improved probability estimates while remaining unbiased. Experiments demonstrate that the approach outperforms existing MCMC algorithms.
|
1611.07629
|
EPTCS
|
Grigory Fedyukovich (UW), Rastislav Bod\'ik (UW)
|
Approaching Symbolic Parallelization by Synthesis of Recurrence
Decompositions
|
In Proceedings SYNT 2016, arXiv:1611.07178
|
EPTCS 229, 2016, pp. 55-66
|
10.4204/EPTCS.229.6
| null |
cs.PL cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present GraSSP, a novel approach to perform automated parallelization
relying on recent advances in formal verification and synthesis. GraSSP
augments an existing sequential program with an additional functionality to
decompose data dependencies in loop iterations, to compute partial results, and
to compose them together. We show that for some classes of the sequential
prefix sum problems, such parallelization can be performed efficiently.
|
[
{
"created": "Wed, 23 Nov 2016 03:28:09 GMT",
"version": "v1"
}
] |
2016-11-24
|
[
[
"Fedyukovich",
"Grigory",
"",
"UW"
],
[
"Bodík",
"Rastislav",
"",
"UW"
]
] |
We present GraSSP, a novel approach to perform automated parallelization relying on recent advances in formal verification and synthesis. GraSSP augments an existing sequential program with an additional functionality to decompose data dependencies in loop iterations, to compute partial results, and to compose them together. We show that for some classes of the sequential prefix sum problems, such parallelization can be performed efficiently.
|
2004.10024
|
Haitian Zheng
|
Haitian Zheng, Haofu Liao, Lele Chen, Wei Xiong, Tianlang Chen, Jiebo
Luo
|
Example-Guided Image Synthesis across Arbitrary Scenes using Masked
Spatial-Channel Attention and Self-Supervision
|
24 pages. arXiv admin note: substantial text overlap with
arXiv:1911.12362
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Example-guided image synthesis has recently been attempted to synthesize an
image from a semantic label map and an exemplary image. In the task, the
additional exemplar image provides the style guidance that controls the
appearance of the synthesized output. Despite the controllability advantage,
the existing models are designed on datasets with specific and roughly aligned
objects. In this paper, we tackle a more challenging and general task, where
the exemplar is an arbitrary scene image that is semantically different from
the given label map. To this end, we first propose a Masked Spatial-Channel
Attention (MSCA) module which models the correspondence between two arbitrary
scenes via efficient decoupled attention. Next, we propose an end-to-end
network for joint global and local feature alignment and synthesis. Finally, we
propose a novel self-supervision task to enable training. Experiments on the
large-scale and more diverse COCO-stuff dataset show significant improvements
over the existing methods. Moreover, our approach provides interpretability and
can be readily extended to other content manipulation tasks including style and
spatial interpolation or extrapolation.
|
[
{
"created": "Sat, 18 Apr 2020 18:17:40 GMT",
"version": "v1"
}
] |
2020-04-22
|
[
[
"Zheng",
"Haitian",
""
],
[
"Liao",
"Haofu",
""
],
[
"Chen",
"Lele",
""
],
[
"Xiong",
"Wei",
""
],
[
"Chen",
"Tianlang",
""
],
[
"Luo",
"Jiebo",
""
]
] |
Example-guided image synthesis has recently been attempted to synthesize an image from a semantic label map and an exemplary image. In the task, the additional exemplar image provides the style guidance that controls the appearance of the synthesized output. Despite the controllability advantage, the existing models are designed on datasets with specific and roughly aligned objects. In this paper, we tackle a more challenging and general task, where the exemplar is an arbitrary scene image that is semantically different from the given label map. To this end, we first propose a Masked Spatial-Channel Attention (MSCA) module which models the correspondence between two arbitrary scenes via efficient decoupled attention. Next, we propose an end-to-end network for joint global and local feature alignment and synthesis. Finally, we propose a novel self-supervision task to enable training. Experiments on the large-scale and more diverse COCO-stuff dataset show significant improvements over the existing methods. Moreover, our approach provides interpretability and can be readily extended to other content manipulation tasks including style and spatial interpolation or extrapolation.
|
cs/0109014
|
Janet van der Linden
|
Janet van der Linden (The Open University)
|
Assigning Satisfaction Values to Constraints: An Algorithm to Solve
Dynamic Meta-Constraints
|
11 pages. Proceedings ERCIM WG on Constraints (Prague, June 2001)
| null | null | null |
cs.PL cs.AI
| null |
The model of Dynamic Meta-Constraints has special activity constraints which
can activate other constraints. It also has meta-constraints which range over
other constraints. An algorithm is presented in which constraints can be
assigned one of five different satisfaction values, which leads to the
assignment of domain values to the variables in the CSP. An outline of the
model and the algorithm is presented, followed by some initial results for two
problems: a simple classic CSP and the Car Configuration Problem. The algorithm
is shown to perform few backtracks per solution, but to have overheads in the
form of historical records required for the implementation of state.
|
[
{
"created": "Thu, 13 Sep 2001 11:03:18 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"van der Linden",
"Janet",
"",
"The Open University"
]
] |
The model of Dynamic Meta-Constraints has special activity constraints which can activate other constraints. It also has meta-constraints which range over other constraints. An algorithm is presented in which constraints can be assigned one of five different satisfaction values, which leads to the assignment of domain values to the variables in the CSP. An outline of the model and the algorithm is presented, followed by some initial results for two problems: a simple classic CSP and the Car Configuration Problem. The algorithm is shown to perform few backtracks per solution, but to have overheads in the form of historical records required for the implementation of state.
|
1911.08217
|
Huizhou Li
|
Chao Yang, Huizhou Li, Fangting Lin, Bin Jiang, Hao Zhao
|
Constrained R-CNN: A general image manipulation detection model
|
Accepted to IEEE International Conference on Multimedia and Expo
(ICME2020)
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, deep learning-based models have exhibited remarkable performance
for image manipulation detection. However, most of them suffer from poor
universality of handcrafted or predetermined features. Meanwhile, they only
focus on manipulation localization and overlook manipulation classification. To
address these issues, we propose a coarse-to-fine architecture named
Constrained R-CNN for complete and accurate image forensics. First, the
learnable manipulation feature extractor learns a unified feature
representation directly from data. Second, the attention region proposal
network effectively discriminates manipulated regions for the next manipulation
classification and coarse localization. Then, the skip structure fuses
low-level and high-level information to refine the global manipulation
features. Finally, the coarse localization information guides the model to
further learn the finer local features and segment out the tampered region.
Experimental results show that our model achieves state-of-the-art performance.
Especially, the F1 score is increased by 28.4%, 73.2%, 13.3% on the NIST16,
COVERAGE, and Columbia dataset.
|
[
{
"created": "Tue, 19 Nov 2019 12:12:20 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Nov 2019 12:14:58 GMT",
"version": "v2"
},
{
"created": "Sun, 15 Mar 2020 11:01:38 GMT",
"version": "v3"
}
] |
2020-03-17
|
[
[
"Yang",
"Chao",
""
],
[
"Li",
"Huizhou",
""
],
[
"Lin",
"Fangting",
""
],
[
"Jiang",
"Bin",
""
],
[
"Zhao",
"Hao",
""
]
] |
Recently, deep learning-based models have exhibited remarkable performance for image manipulation detection. However, most of them suffer from poor universality of handcrafted or predetermined features. Meanwhile, they only focus on manipulation localization and overlook manipulation classification. To address these issues, we propose a coarse-to-fine architecture named Constrained R-CNN for complete and accurate image forensics. First, the learnable manipulation feature extractor learns a unified feature representation directly from data. Second, the attention region proposal network effectively discriminates manipulated regions for the next manipulation classification and coarse localization. Then, the skip structure fuses low-level and high-level information to refine the global manipulation features. Finally, the coarse localization information guides the model to further learn the finer local features and segment out the tampered region. Experimental results show that our model achieves state-of-the-art performance. Especially, the F1 score is increased by 28.4%, 73.2%, 13.3% on the NIST16, COVERAGE, and Columbia dataset.
|
1810.04329
|
Muhammad Hilman
|
Muhammad H. Hilman and Maria A. Rodriguez and Rajkumar Buyya
|
Task Runtime Prediction in Scientific Workflows Using an Online
Incremental Learning Approach
|
Accepted for presentation at main conference track of 11th IEEE/ACM
International Conference on Utility and Cloud Computing
| null |
10.1109/UCC.2018.00018
| null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many algorithms in workflow scheduling and resource provisioning rely on the
performance estimation of tasks to produce a scheduling plan. A profiler that
is capable of modeling the execution of tasks and predicting their runtime
accurately, therefore, becomes an essential part of any Workflow Management
System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS)
platforms that use clouds for deploying scientific workflows, task runtime
prediction becomes more challenging because it requires the processing of a
significant amount of data in a near real-time scenario while dealing with the
performance variability of cloud resources. Hence, relying on methods such as
profiling tasks' execution data using basic statistical description (e.g.,
mean, standard deviation) or batch offline regression techniques to estimate
the runtime may not be suitable for such environments. In this paper, we
propose an online incremental learning approach to predict the runtime of tasks
in scientific workflows in clouds. To improve the performance of the
predictions, we harness fine-grained resources monitoring data in the form of
time-series records of CPU utilization, memory usage, and I/O activities that
are reflecting the unique characteristics of a task's execution. We compare our
solution to a state-of-the-art approach that exploits the resources monitoring
data based on regression machine learning technique. From our experiments, the
proposed strategy improves the performance, in terms of the error, up to
29.89%, compared to the state-of-the-art solutions.
|
[
{
"created": "Wed, 10 Oct 2018 01:59:08 GMT",
"version": "v1"
}
] |
2019-03-01
|
[
[
"Hilman",
"Muhammad H.",
""
],
[
"Rodriguez",
"Maria A.",
""
],
[
"Buyya",
"Rajkumar",
""
]
] |
Many algorithms in workflow scheduling and resource provisioning rely on the performance estimation of tasks to produce a scheduling plan. A profiler that is capable of modeling the execution of tasks and predicting their runtime accurately, therefore, becomes an essential part of any Workflow Management System (WMS). With the emergence of multi-tenant Workflow as a Service (WaaS) platforms that use clouds for deploying scientific workflows, task runtime prediction becomes more challenging because it requires the processing of a significant amount of data in a near real-time scenario while dealing with the performance variability of cloud resources. Hence, relying on methods such as profiling tasks' execution data using basic statistical description (e.g., mean, standard deviation) or batch offline regression techniques to estimate the runtime may not be suitable for such environments. In this paper, we propose an online incremental learning approach to predict the runtime of tasks in scientific workflows in clouds. To improve the performance of the predictions, we harness fine-grained resources monitoring data in the form of time-series records of CPU utilization, memory usage, and I/O activities that are reflecting the unique characteristics of a task's execution. We compare our solution to a state-of-the-art approach that exploits the resources monitoring data based on regression machine learning technique. From our experiments, the proposed strategy improves the performance, in terms of the error, up to 29.89%, compared to the state-of-the-art solutions.
|
2211.15944
|
Samuel Kessler
|
Samuel Kessler, Mateusz Ostaszewski, Micha{\l} Bortkiewicz, Mateusz
\.Zarski, Maciej Wo{\l}czyk, Jack Parker-Holder, Stephen J. Roberts and Piotr
Mi{\l}o\'s
|
The Effectiveness of World Models for Continual Reinforcement Learning
|
Accepted at CoLLAs 2023, 21 pages, 15 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
World models power some of the most efficient reinforcement learning
algorithms. In this work, we showcase that they can be harnessed for continual
learning - a situation when the agent faces changing environments. World models
typically employ a replay buffer for training, which can be naturally extended
to continual learning. We systematically study how different selective
experience replay methods affect performance, forgetting, and transfer. We also
provide recommendations regarding various modeling options for using world
models. The best set of choices is called Continual-Dreamer, it is
task-agnostic and utilizes the world model for continual exploration.
Continual-Dreamer is sample efficient and outperforms state-of-the-art
task-agnostic continual reinforcement learning methods on Minigrid and Minihack
benchmarks.
|
[
{
"created": "Tue, 29 Nov 2022 05:56:51 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jul 2023 22:46:47 GMT",
"version": "v2"
}
] |
2023-07-14
|
[
[
"Kessler",
"Samuel",
""
],
[
"Ostaszewski",
"Mateusz",
""
],
[
"Bortkiewicz",
"Michał",
""
],
[
"Żarski",
"Mateusz",
""
],
[
"Wołczyk",
"Maciej",
""
],
[
"Parker-Holder",
"Jack",
""
],
[
"Roberts",
"Stephen J.",
""
],
[
"Miłoś",
"Piotr",
""
]
] |
World models power some of the most efficient reinforcement learning algorithms. In this work, we showcase that they can be harnessed for continual learning - a situation when the agent faces changing environments. World models typically employ a replay buffer for training, which can be naturally extended to continual learning. We systematically study how different selective experience replay methods affect performance, forgetting, and transfer. We also provide recommendations regarding various modeling options for using world models. The best set of choices is called Continual-Dreamer, it is task-agnostic and utilizes the world model for continual exploration. Continual-Dreamer is sample efficient and outperforms state-of-the-art task-agnostic continual reinforcement learning methods on Minigrid and Minihack benchmarks.
|
1907.00657
|
Jonathan Dong
|
Jonathan Dong, Mushegh Rafayelyan, Florent Krzakala, Sylvain Gigan
|
Optical Reservoir Computing using multiple light scattering for chaotic
systems prediction
| null |
IEEE Journal of Selected Topics in Quantum Electronics ( Volume:
26 , Issue: 1 , Jan.-Feb. 2020 )
|
10.1109/JSTQE.2019.2936281
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reservoir Computing is a relatively recent computational framework based on a
large Recurrent Neural Network with fixed weights. Many physical
implementations of Reservoir Computing have been proposed to improve speed and
energy efficiency. In this study, we report new advances in Optical Reservoir
Computing using multiple light scattering to accelerate the recursive
computation of the reservoir states. Two different spatial light modulation
technologies, namely, phase or binary amplitude modulations, are compared.
Phase modulation is a promising direction already employed in other photonic
implementations of Reservoir Computing. Additionally, we report a
Digital-Micromirror-based Reservoir Computing at up to 640 Hz, more than double
the previously reported frequency using a remotely controlled optical device
developed by LightOn, and present new binarization strategies to improve the
performance of binarized Reservoir Computing.
|
[
{
"created": "Mon, 1 Jul 2019 11:11:41 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Aug 2019 17:42:23 GMT",
"version": "v2"
}
] |
2019-09-10
|
[
[
"Dong",
"Jonathan",
""
],
[
"Rafayelyan",
"Mushegh",
""
],
[
"Krzakala",
"Florent",
""
],
[
"Gigan",
"Sylvain",
""
]
] |
Reservoir Computing is a relatively recent computational framework based on a large Recurrent Neural Network with fixed weights. Many physical implementations of Reservoir Computing have been proposed to improve speed and energy efficiency. In this study, we report new advances in Optical Reservoir Computing using multiple light scattering to accelerate the recursive computation of the reservoir states. Two different spatial light modulation technologies, namely, phase or binary amplitude modulations, are compared. Phase modulation is a promising direction already employed in other photonic implementations of Reservoir Computing. Additionally, we report a Digital-Micromirror-based Reservoir Computing at up to 640 Hz, more than double the previously reported frequency using a remotely controlled optical device developed by LightOn, and present new binarization strategies to improve the performance of binarized Reservoir Computing.
|
2405.09801
|
Zhiqi Li
|
Zhiqi Li, Barnab\'as B\"orcs\"ok, Duowen Chen, Yutong Sun, Bo Zhu,
Greg Turk
|
Lagrangian Covector Fluid with Free Surface
|
10 pages, 17 figures, SIGGRAPH Conference Papers '24
| null |
10.1145/3641519.3657514
| null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a novel Lagrangian fluid solver based on covector flow
maps. We aim to address the challenges of establishing a robust flow-map solver
for incompressible fluids under complex boundary conditions. Our key idea is to
use particle trajectories to establish precise flow maps and tailor path
integrals of physical quantities along these trajectories to reformulate the
Poisson problem during the projection step. We devise a decoupling mechanism
based on path-integral identities from flow-map theory. This mechanism
integrates long-range flow maps for the main fluid body into a short-range
projection framework, ensuring a robust treatment of free boundaries. We show
that our method can effectively transform a long-range projection problem with
integral boundaries into a Poisson problem with standard boundary conditions --
specifically, zero Dirichlet on the free surface and zero Neumann on solid
boundaries. This transformation significantly enhances robustness and accuracy,
extending the applicability of flow-map methods to complex free-surface
problems.
|
[
{
"created": "Thu, 16 May 2024 04:14:29 GMT",
"version": "v1"
}
] |
2024-05-17
|
[
[
"Li",
"Zhiqi",
""
],
[
"Börcsök",
"Barnabás",
""
],
[
"Chen",
"Duowen",
""
],
[
"Sun",
"Yutong",
""
],
[
"Zhu",
"Bo",
""
],
[
"Turk",
"Greg",
""
]
] |
This paper introduces a novel Lagrangian fluid solver based on covector flow maps. We aim to address the challenges of establishing a robust flow-map solver for incompressible fluids under complex boundary conditions. Our key idea is to use particle trajectories to establish precise flow maps and tailor path integrals of physical quantities along these trajectories to reformulate the Poisson problem during the projection step. We devise a decoupling mechanism based on path-integral identities from flow-map theory. This mechanism integrates long-range flow maps for the main fluid body into a short-range projection framework, ensuring a robust treatment of free boundaries. We show that our method can effectively transform a long-range projection problem with integral boundaries into a Poisson problem with standard boundary conditions -- specifically, zero Dirichlet on the free surface and zero Neumann on solid boundaries. This transformation significantly enhances robustness and accuracy, extending the applicability of flow-map methods to complex free-surface problems.
|
2306.00435
|
Chen Zhang
|
Chen Zhang, Jiuheng Lin, Xiao Liu, Yuxuan Lai, Yansong Feng, Dongyan
Zhao
|
How Many Answers Should I Give? An Empirical Study of Multi-Answer
Reading Comprehension
|
Findings of ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The multi-answer phenomenon, where a question may have multiple answers
scattered in the document, can be well handled by humans but is challenging
enough for machine reading comprehension (MRC) systems. Despite recent progress
in multi-answer MRC, there lacks a systematic analysis of how this phenomenon
arises and how to better address it. In this work, we design a taxonomy to
categorize commonly-seen multi-answer MRC instances, with which we inspect
three multi-answer datasets and analyze where the multi-answer challenge comes
from. We further analyze how well different paradigms of current multi-answer
MRC models deal with different types of multi-answer instances. We find that
some paradigms capture well the key information in the questions while others
better model the relationship between questions and contexts. We thus explore
strategies to make the best of the strengths of different paradigms.
Experiments show that generation models can be a promising platform to
incorporate different paradigms. Our annotations and code are released for
further research.
|
[
{
"created": "Thu, 1 Jun 2023 08:22:21 GMT",
"version": "v1"
}
] |
2023-06-02
|
[
[
"Zhang",
"Chen",
""
],
[
"Lin",
"Jiuheng",
""
],
[
"Liu",
"Xiao",
""
],
[
"Lai",
"Yuxuan",
""
],
[
"Feng",
"Yansong",
""
],
[
"Zhao",
"Dongyan",
""
]
] |
The multi-answer phenomenon, where a question may have multiple answers scattered in the document, can be well handled by humans but is challenging enough for machine reading comprehension (MRC) systems. Despite recent progress in multi-answer MRC, there lacks a systematic analysis of how this phenomenon arises and how to better address it. In this work, we design a taxonomy to categorize commonly-seen multi-answer MRC instances, with which we inspect three multi-answer datasets and analyze where the multi-answer challenge comes from. We further analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances. We find that some paradigms capture well the key information in the questions while others better model the relationship between questions and contexts. We thus explore strategies to make the best of the strengths of different paradigms. Experiments show that generation models can be a promising platform to incorporate different paradigms. Our annotations and code are released for further research.
|
2105.02714
|
Dvir Ginzburg
|
Dvir Ginzburg and Dan Raviv
|
Deep Weighted Consensus: Dense correspondence confidence maps for 3D
shape registration
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new paradigm for rigid alignment between point clouds based on
learnable weighted consensus which is robust to noise as well as the full
spectrum of the rotation group.
Current models, learnable or axiomatic, work well for constrained
orientations and limited noise levels, usually by an end-to-end learner or an
iterative scheme. However, real-world tasks require us to deal with large
rotations as well as outliers and all known models fail to deliver.
Here we present a different direction. We claim that we can align point
clouds out of sampled matched points according to confidence level derived from
a dense, soft alignment map. The pipeline is differentiable, and converges
under large rotations in the full spectrum of SO(3), even with high noise
levels. We compared the network to recently presented methods such as DCP,
PointNetLK, RPM-Net, PRnet, and axiomatic methods such as ICP and Go-ICP. We
report here a fundamental boost in performance.
|
[
{
"created": "Thu, 6 May 2021 14:27:59 GMT",
"version": "v1"
}
] |
2021-05-07
|
[
[
"Ginzburg",
"Dvir",
""
],
[
"Raviv",
"Dan",
""
]
] |
We present a new paradigm for rigid alignment between point clouds based on learnable weighted consensus which is robust to noise as well as the full spectrum of the rotation group. Current models, learnable or axiomatic, work well for constrained orientations and limited noise levels, usually by an end-to-end learner or an iterative scheme. However, real-world tasks require us to deal with large rotations as well as outliers and all known models fail to deliver. Here we present a different direction. We claim that we can align point clouds out of sampled matched points according to confidence level derived from a dense, soft alignment map. The pipeline is differentiable, and converges under large rotations in the full spectrum of SO(3), even with high noise levels. We compared the network to recently presented methods such as DCP, PointNetLK, RPM-Net, PRnet, and axiomatic methods such as ICP and Go-ICP. We report here a fundamental boost in performance.
|
1912.10718
|
Fang Aiqing
|
Aiqing Fang, Xinbo Zhao, Yanning Zhang
|
Cross-Modal Image Fusion Theory Guided by Subjective Visual Attention
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The human visual perception system has very strong robustness and contextual
awareness in a variety of image processing tasks. This robustness and the
perception ability of contextual awareness is closely related to the
characteristics of multi-task auxiliary learning and subjective attention of
the human visual perception system. In order to improve the robustness and
contextual awareness of image fusion tasks, we proposed a multi-task auxiliary
learning image fusion theory guided by subjective attention. The image fusion
theory effectively unifies the subjective task intention and prior knowledge of
human brain. In order to achieve our proposed image fusion theory, we first
analyze the mechanism of multi-task auxiliary learning, build a multi-task
auxiliary learning network. Secondly, based on the human visual attention
perception mechanism, we introduce the human visual attention network guided by
subjective tasks on the basis of the multi-task auxiliary learning network. The
subjective intention is introduced by the subjective attention task model, so
that the network can fuse images according to the subjective intention.
Finally, in order to verify the superiority of our image fusion theory, we
carried out experiments on the combined vision system image data set, and the
infrared and visible image data set for experimental verification. The
experimental results demonstrate the superiority of our fusion theory over
state-of-arts in contextual awareness and robustness.
|
[
{
"created": "Mon, 23 Dec 2019 10:29:34 GMT",
"version": "v1"
}
] |
2019-12-24
|
[
[
"Fang",
"Aiqing",
""
],
[
"Zhao",
"Xinbo",
""
],
[
"Zhang",
"Yanning",
""
]
] |
The human visual perception system has very strong robustness and contextual awareness in a variety of image processing tasks. This robustness and the perception ability of contextual awareness is closely related to the characteristics of multi-task auxiliary learning and subjective attention of the human visual perception system. In order to improve the robustness and contextual awareness of image fusion tasks, we proposed a multi-task auxiliary learning image fusion theory guided by subjective attention. The image fusion theory effectively unifies the subjective task intention and prior knowledge of human brain. In order to achieve our proposed image fusion theory, we first analyze the mechanism of multi-task auxiliary learning, build a multi-task auxiliary learning network. Secondly, based on the human visual attention perception mechanism, we introduce the human visual attention network guided by subjective tasks on the basis of the multi-task auxiliary learning network. The subjective intention is introduced by the subjective attention task model, so that the network can fuse images according to the subjective intention. Finally, in order to verify the superiority of our image fusion theory, we carried out experiments on the combined vision system image data set, and the infrared and visible image data set for experimental verification. The experimental results demonstrate the superiority of our fusion theory over state-of-arts in contextual awareness and robustness.
|
cs/0302014
|
V. Sriram
|
Akshar Bharati, V.Sriram, A.Vamshi Krishna, Rajeev Sangal, S.M.Bendre
|
An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical
Information
|
10 pages, 5 figures, Conference : International Conference on Natural
Language Processing ' 2002, Mumbai
| null | null | null |
cs.CL
| null |
In this paper we describe an algorithm for aligning sentences with their
translations in a bilingual corpus using lexical information of the languages.
Existing efficient algorithms ignore word identities and consider only the
sentence lengths (Brown, 1991; Gale and Church, 1993). For a sentence in the
source language text, the proposed algorithm picks the most likely translation
from the target language text using lexical information and certain heuristics.
It does not do statistical analysis using sentence lengths. The algorithm is
language independent. It also aids in detecting addition and deletion of text
in translations. The algorithm gives comparable results with the existing
algorithms in most of the cases while it does better in cases where statistical
algorithms do not give good results.
|
[
{
"created": "Wed, 12 Feb 2003 06:31:54 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Bharati",
"Akshar",
""
],
[
"Sriram",
"V.",
""
],
[
"Krishna",
"A. Vamshi",
""
],
[
"Sangal",
"Rajeev",
""
],
[
"Bendre",
"S. M.",
""
]
] |
In this paper we describe an algorithm for aligning sentences with their translations in a bilingual corpus using lexical information of the languages. Existing efficient algorithms ignore word identities and consider only the sentence lengths (Brown, 1991; Gale and Church, 1993). For a sentence in the source language text, the proposed algorithm picks the most likely translation from the target language text using lexical information and certain heuristics. It does not do statistical analysis using sentence lengths. The algorithm is language independent. It also aids in detecting addition and deletion of text in translations. The algorithm gives comparable results with the existing algorithms in most of the cases while it does better in cases where statistical algorithms do not give good results.
|
2401.12973
|
Ekin Aky\"urek
|
Ekin Aky\"urek, Bailin Wang, Yoon Kim, Jacob Andreas
|
In-Context Language Learning: Architectures and Algorithms
|
Fixes a typo in the title, and adds additional references
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale neural language models exhibit a remarkable capacity for
in-context learning (ICL): they can infer novel functions from datasets
provided as input. Most of our current understanding of when and how ICL arises
comes from LMs trained on extremely simple learning problems like linear
regression and associative recall. There remains a significant gap between
these model problems and the "real" ICL exhibited by LMs trained on large text
corpora, which involves not just retrieval and function approximation but
free-form generation of language and other structured outputs. In this paper,
we study ICL through the lens of a new family of model problems we term in
context language learning (ICLL). In ICLL, LMs are presented with a set of
strings from a formal language, and must generate additional strings from the
same language. We focus on in-context learning of regular languages generated
by random finite automata. We evaluate a diverse set of neural sequence models
(including several RNNs, Transformers, and state-space model variants) on
regular ICLL tasks, aiming to answer three questions: (1) Which model classes
are empirically capable of ICLL? (2) What algorithmic solutions do successful
models implement to perform ICLL? (3) What architectural changes can improve
ICLL in less performant models? We first show that Transformers significantly
outperform neural sequence models with recurrent or convolutional
representations on ICLL tasks. Next, we provide evidence that their ability to
do so relies on specialized "n-gram heads" (higher-order variants of induction
heads) that compute input-conditional next-token distributions. Finally, we
show that hard-wiring these heads into neural models improves performance not
just on ICLL, but natural language modeling -- improving the perplexity of
340M-parameter models by up to 1.14 points (6.7%) on the SlimPajama dataset.
|
[
{
"created": "Tue, 23 Jan 2024 18:59:21 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jan 2024 18:59:34 GMT",
"version": "v2"
}
] |
2024-01-31
|
[
[
"Akyürek",
"Ekin",
""
],
[
"Wang",
"Bailin",
""
],
[
"Kim",
"Yoon",
""
],
[
"Andreas",
"Jacob",
""
]
] |
Large-scale neural language models exhibit a remarkable capacity for in-context learning (ICL): they can infer novel functions from datasets provided as input. Most of our current understanding of when and how ICL arises comes from LMs trained on extremely simple learning problems like linear regression and associative recall. There remains a significant gap between these model problems and the "real" ICL exhibited by LMs trained on large text corpora, which involves not just retrieval and function approximation but free-form generation of language and other structured outputs. In this paper, we study ICL through the lens of a new family of model problems we term in context language learning (ICLL). In ICLL, LMs are presented with a set of strings from a formal language, and must generate additional strings from the same language. We focus on in-context learning of regular languages generated by random finite automata. We evaluate a diverse set of neural sequence models (including several RNNs, Transformers, and state-space model variants) on regular ICLL tasks, aiming to answer three questions: (1) Which model classes are empirically capable of ICLL? (2) What algorithmic solutions do successful models implement to perform ICLL? (3) What architectural changes can improve ICLL in less performant models? We first show that Transformers significantly outperform neural sequence models with recurrent or convolutional representations on ICLL tasks. Next, we provide evidence that their ability to do so relies on specialized "n-gram heads" (higher-order variants of induction heads) that compute input-conditional next-token distributions. Finally, we show that hard-wiring these heads into neural models improves performance not just on ICLL, but natural language modeling -- improving the perplexity of 340M-parameter models by up to 1.14 points (6.7%) on the SlimPajama dataset.
|
2111.06959
|
Rakesh John Amala Arokia Nathan
|
Rakesh John Amala Arokia Nathan, Indrajit Kurmi, David C. Schedl and
Oliver Bimber
|
Through-Foliage Tracking with Airborne Optical Sectioning
|
9 Pages, 9 Figures, 1 Table and supplementary videos and material
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Detecting and tracking moving targets through foliage is difficult, and for
many cases even impossible in regular aerial images and videos. We present an
initial light-weight and drone-operated 1D camera array that supports parallel
synthetic aperture aerial imaging. Our main finding is that color anomaly
detection benefits significantly from image integration when compared to
conventional raw images or video frames (on average 97% vs. 42% in precision in
our field experiments). We demonstrate, that these two contributions can lead
to the detection and tracking of moving people through densely occluding
forest.
|
[
{
"created": "Fri, 12 Nov 2021 21:54:25 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Nov 2021 08:51:47 GMT",
"version": "v2"
}
] |
2021-12-01
|
[
[
"Nathan",
"Rakesh John Amala Arokia",
""
],
[
"Kurmi",
"Indrajit",
""
],
[
"Schedl",
"David C.",
""
],
[
"Bimber",
"Oliver",
""
]
] |
Detecting and tracking moving targets through foliage is difficult, and for many cases even impossible in regular aerial images and videos. We present an initial light-weight and drone-operated 1D camera array that supports parallel synthetic aperture aerial imaging. Our main finding is that color anomaly detection benefits significantly from image integration when compared to conventional raw images or video frames (on average 97% vs. 42% in precision in our field experiments). We demonstrate, that these two contributions can lead to the detection and tracking of moving people through densely occluding forest.
|
2104.07658
|
Weidi Xie
|
Charig Yang, Hala Lamdouar, Erika Lu, Andrew Zisserman, Weidi Xie
|
Self-supervised Video Object Segmentation by Motion Grouping
|
Best Paper in CVPR2021 RVSU Workshop. Accepted by ICCV
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Animals have evolved highly functional visual systems to understand motion,
assisting perception even under complex environments. In this paper, we work
towards developing a computer vision system able to segment objects by
exploiting motion cues, i.e. motion segmentation. We make the following
contributions: First, we introduce a simple variant of the Transformer to
segment optical flow frames into primary objects and the background. Second, we
train the architecture in a self-supervised manner, i.e. without using any
manual annotations. Third, we analyze several critical components of our method
and conduct thorough ablation studies to validate their necessity. Fourth, we
evaluate the proposed architecture on public benchmarks (DAVIS2016, SegTrackv2,
and FBMS59). Despite using only optical flow as input, our approach achieves
superior or comparable results to previous state-of-the-art self-supervised
methods, while being an order of magnitude faster. We additionally evaluate on
a challenging camouflage dataset (MoCA), significantly outperforming the other
self-supervised approaches, and comparing favourably to the top supervised
approach, highlighting the importance of motion cues, and the potential bias
towards visual appearance in existing video segmentation models.
|
[
{
"created": "Thu, 15 Apr 2021 17:59:32 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Aug 2021 09:56:30 GMT",
"version": "v2"
}
] |
2021-08-12
|
[
[
"Yang",
"Charig",
""
],
[
"Lamdouar",
"Hala",
""
],
[
"Lu",
"Erika",
""
],
[
"Zisserman",
"Andrew",
""
],
[
"Xie",
"Weidi",
""
]
] |
Animals have evolved highly functional visual systems to understand motion, assisting perception even under complex environments. In this paper, we work towards developing a computer vision system able to segment objects by exploiting motion cues, i.e. motion segmentation. We make the following contributions: First, we introduce a simple variant of the Transformer to segment optical flow frames into primary objects and the background. Second, we train the architecture in a self-supervised manner, i.e. without using any manual annotations. Third, we analyze several critical components of our method and conduct thorough ablation studies to validate their necessity. Fourth, we evaluate the proposed architecture on public benchmarks (DAVIS2016, SegTrackv2, and FBMS59). Despite using only optical flow as input, our approach achieves superior or comparable results to previous state-of-the-art self-supervised methods, while being an order of magnitude faster. We additionally evaluate on a challenging camouflage dataset (MoCA), significantly outperforming the other self-supervised approaches, and comparing favourably to the top supervised approach, highlighting the importance of motion cues, and the potential bias towards visual appearance in existing video segmentation models.
|
1009.4563
|
Ayyasamy S
|
S.Ayyasamy and S.N. Sivanandam
|
A Cluster Based Replication Architecture for Load Balancing in
Peer-to-Peer Content Distribution
|
15 pages, 8 figures
|
International Journal of Computer Networks & Communications
(IJCNC) Vol.2, No.5, September 2010
|
10.5121/ijcnc.2010.2510
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In P2P systems, large volumes of data are declustered naturally across a
large number of peers. But it is very difficult to control the initial data
distribution because every user has the freedom to share any data with other
users. The system scalability can be improved by distributing the load across
multiple servers which is proposed by replication. The large scale content
distribution systems were improved broadly using the replication techniques.
The demanded contents can be brought closer to the clients by multiplying the
source of information geographically, which in turn reduce both the access
latency and the network traffic. In addition to this, due to the intrinsic
dynamism of the P2P environment, static data distribution cannot be expected to
guarantee good load balancing. If the hot peers become bottleneck, it leads to
increased user response time and significant performance degradation of the
system. Hence an effective load balancing mechanism is necessary in such cases
and it can be attained efficiently by intelligent data replication. In this
paper, we propose a cluster based replication architecture for load-balancing
in peer-to-peer content distribution systems. In addition to an intelligent
replica placement technique, it also consists of an effective load balancing
technique. In the intelligent replica placement technique, peers are grouped
into strong and weak clusters based on their weight vector which comprises
available capacity, CPU speed, access latency and memory size. In order to
achieve complete load balancing across the system, an intracluster and
inter-cluster load balancing algorithms are proposed. We are able to show that
our proposed architecture attains less latency and better throughput with
reduced bandwidth usage, through the simulation results.
|
[
{
"created": "Thu, 23 Sep 2010 10:24:53 GMT",
"version": "v1"
}
] |
2010-09-24
|
[
[
"Ayyasamy",
"S.",
""
],
[
"Sivanandam",
"S. N.",
""
]
] |
In P2P systems, large volumes of data are declustered naturally across a large number of peers. But it is very difficult to control the initial data distribution because every user has the freedom to share any data with other users. The system scalability can be improved by distributing the load across multiple servers which is proposed by replication. The large scale content distribution systems were improved broadly using the replication techniques. The demanded contents can be brought closer to the clients by multiplying the source of information geographically, which in turn reduce both the access latency and the network traffic. In addition to this, due to the intrinsic dynamism of the P2P environment, static data distribution cannot be expected to guarantee good load balancing. If the hot peers become bottleneck, it leads to increased user response time and significant performance degradation of the system. Hence an effective load balancing mechanism is necessary in such cases and it can be attained efficiently by intelligent data replication. In this paper, we propose a cluster based replication architecture for load-balancing in peer-to-peer content distribution systems. In addition to an intelligent replica placement technique, it also consists of an effective load balancing technique. In the intelligent replica placement technique, peers are grouped into strong and weak clusters based on their weight vector which comprises available capacity, CPU speed, access latency and memory size. In order to achieve complete load balancing across the system, an intracluster and inter-cluster load balancing algorithms are proposed. We are able to show that our proposed architecture attains less latency and better throughput with reduced bandwidth usage, through the simulation results.
|
cs/0611065
|
Milton Chowdhury
|
M. M. Chowdhury
|
On the security of new key exchange protocols based on the triple
decomposition problem
|
This figures are given in the other version
| null | null | null |
cs.CR
| null |
We show that two new key exchange protocols with security based on the triple
DP may have security based on the MSCSP.
|
[
{
"created": "Tue, 14 Nov 2006 23:54:56 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2006 10:50:57 GMT",
"version": "v2"
},
{
"created": "Thu, 6 Dec 2007 17:43:37 GMT",
"version": "v3"
}
] |
2007-12-06
|
[
[
"Chowdhury",
"M. M.",
""
]
] |
We show that two new key exchange protocols with security based on the triple DP may have security based on the MSCSP.
|
2212.03467
|
Christopher Jerrett
|
Yue Han, Christopher Jerrett, Elliot Anshelevich
|
Optimizing Multiple Simultaneous Objectives for Voting and Facility
Location
|
To be published in the Proceedings of 37th Conference on Artificial
Intelligence (AAAI 2023)
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the classic facility location setting, where we are given $n$
clients and $m$ possible facility locations in some arbitrary metric space, and
want to choose a location to build a facility. The exact same setting also
arises in spatial social choice, where voters are the clients and the goal is
to choose a candidate or outcome, with the distance from a voter to an outcome
representing the cost of this outcome for the voter (e.g., based on their
ideological differences). Unlike most previous work, we do not focus on a
single objective to optimize (e.g., the total distance from clients to the
facility, or the maximum distance, etc.), but instead attempt to optimize
several different objectives simultaneously. More specifically, we consider the
$l$-centrum family of objectives, which includes the total distance, max
distance, and many others. We present tight bounds on how well any pair of such
objectives (e.g., max and sum) can be simultaneously approximated compared to
their optimum outcomes. In particular, we show that for any such pair of
objectives, it is always possible to choose an outcome which simultaneously
approximates both objectives within a factor of $1+\sqrt{2}$, and give a
precise characterization of how this factor improves as the two objectives
being optimized become more similar. For $q>2$ different centrum objectives, we
show that it is always possible to approximate all $q$ of these objectives
within a small constant, and that this constant approaches 3 as $q\rightarrow
\infty$. Our results show that when optimizing only a few simultaneous
objectives, it is always possible to form an outcome which is a significantly
better than 3 approximation for all of these objectives.
|
[
{
"created": "Wed, 7 Dec 2022 05:12:40 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Dec 2022 18:01:15 GMT",
"version": "v2"
}
] |
2022-12-13
|
[
[
"Han",
"Yue",
""
],
[
"Jerrett",
"Christopher",
""
],
[
"Anshelevich",
"Elliot",
""
]
] |
We study the classic facility location setting, where we are given $n$ clients and $m$ possible facility locations in some arbitrary metric space, and want to choose a location to build a facility. The exact same setting also arises in spatial social choice, where voters are the clients and the goal is to choose a candidate or outcome, with the distance from a voter to an outcome representing the cost of this outcome for the voter (e.g., based on their ideological differences). Unlike most previous work, we do not focus on a single objective to optimize (e.g., the total distance from clients to the facility, or the maximum distance, etc.), but instead attempt to optimize several different objectives simultaneously. More specifically, we consider the $l$-centrum family of objectives, which includes the total distance, max distance, and many others. We present tight bounds on how well any pair of such objectives (e.g., max and sum) can be simultaneously approximated compared to their optimum outcomes. In particular, we show that for any such pair of objectives, it is always possible to choose an outcome which simultaneously approximates both objectives within a factor of $1+\sqrt{2}$, and give a precise characterization of how this factor improves as the two objectives being optimized become more similar. For $q>2$ different centrum objectives, we show that it is always possible to approximate all $q$ of these objectives within a small constant, and that this constant approaches 3 as $q\rightarrow \infty$. Our results show that when optimizing only a few simultaneous objectives, it is always possible to form an outcome which is a significantly better than 3 approximation for all of these objectives.
|
1504.04428
|
Bo Zhou
|
Bo Zhou, Ying Cui and Meixia Tao
|
Optimal Dynamic Multicast Scheduling for Cache-Enabled Content-Centric
Wireless Networks
|
17 double-column pages; Shorter version appears in ISIT 2015
| null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Caching and multicasting at base stations are two promising approaches to
support massive content delivery over wireless networks. However, existing
scheduling designs do not make full use of the advantages of the two
approaches. In this paper, we consider the optimal dynamic multicast scheduling
to jointly minimize the average delay, power, and fetching costs for
cache-enabled content-centric wireless networks. We formulate this stochastic
optimization problem as an infinite horizon average cost Markov decision
process (MDP). It is well-known to be a difficult problem due to the curse of
dimensionality, and there generally only exist numerical solutions. By using
relative value iteration algorithm and the special structures of the request
queue dynamics, we analyze the properties of the value function and the
state-action cost function of the MDP for both the uniform and nonuniform
channel cases. Based on these properties, we show that the optimal policy,
which is adaptive to the request queue state, has a switch structure in the
uniform case and a partial switch structure in the nonuniform case. Moreover,
in the uniform case with two contents, we show that the switch curve is
monotonically non-decreasing. Then, by exploiting these structural properties
of the optimal policy, we propose two low-complexity optimal algorithms.
Motivated by the switch structures of the optimal policy, to further reduce the
complexity, we also propose a low-complexity suboptimal policy, which possesses
similar structural properties to the optimal policy, and develop a
low-complexity algorithm to compute this policy.
|
[
{
"created": "Fri, 17 Apr 2015 02:52:45 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Apr 2015 10:55:13 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Feb 2016 01:15:52 GMT",
"version": "v3"
}
] |
2016-02-25
|
[
[
"Zhou",
"Bo",
""
],
[
"Cui",
"Ying",
""
],
[
"Tao",
"Meixia",
""
]
] |
Caching and multicasting at base stations are two promising approaches to support massive content delivery over wireless networks. However, existing scheduling designs do not make full use of the advantages of the two approaches. In this paper, we consider the optimal dynamic multicast scheduling to jointly minimize the average delay, power, and fetching costs for cache-enabled content-centric wireless networks. We formulate this stochastic optimization problem as an infinite horizon average cost Markov decision process (MDP). It is well-known to be a difficult problem due to the curse of dimensionality, and there generally only exist numerical solutions. By using relative value iteration algorithm and the special structures of the request queue dynamics, we analyze the properties of the value function and the state-action cost function of the MDP for both the uniform and nonuniform channel cases. Based on these properties, we show that the optimal policy, which is adaptive to the request queue state, has a switch structure in the uniform case and a partial switch structure in the nonuniform case. Moreover, in the uniform case with two contents, we show that the switch curve is monotonically non-decreasing. Then, by exploiting these structural properties of the optimal policy, we propose two low-complexity optimal algorithms. Motivated by the switch structures of the optimal policy, to further reduce the complexity, we also propose a low-complexity suboptimal policy, which possesses similar structural properties to the optimal policy, and develop a low-complexity algorithm to compute this policy.
|
2306.01762
|
Yujian Li
|
Kai Wu, Yujian Betterest Li, Jian Lou, Xiaoyu Zhang, Handing Wang,
Jing Liu
|
Pre-trained transformer for adversarial purification
| null | null | null | null |
cs.CR cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With more and more deep neural networks being deployed as various daily
services, their reliability is essential. It is frightening that deep neural
networks are vulnerable and sensitive to adversarial attacks, the most common
one of which for the services is evasion-based. Recent works usually strengthen
the robustness by adversarial training or leveraging the knowledge of an amount
of clean data. However, retraining and redeploying the model need a large
computational budget, leading to heavy losses to the online service. In
addition, when training, it is likely that only limited adversarial examples
are available for the service provider, while much clean data may not be
accessible. Based on the analysis on the defense for deployed models, we find
that how to rapidly defend against a certain attack for a frozen original
service model with limitations of few clean and adversarial examples, which is
named as RaPiD (Rapid Plug-in Defender), is really important. Motivated by the
generalization and the universal computation ability of pre-trained transformer
models, we come up with a new defender method, CeTaD, which stands for
Considering Pretrained Transformers as Defenders. In particular, we evaluate
the effectiveness and the transferability of CeTaD in the case of one-shot
adversarial examples and explore the impact of different parts of CeTaD as well
as training data conditions. CeTaD is flexible for different differentiable
service models, and suitable for various types of attacks.
|
[
{
"created": "Sat, 27 May 2023 06:00:51 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Aug 2023 04:53:15 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Sep 2023 04:21:57 GMT",
"version": "v3"
}
] |
2023-09-27
|
[
[
"Wu",
"Kai",
""
],
[
"Li",
"Yujian Betterest",
""
],
[
"Lou",
"Jian",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Wang",
"Handing",
""
],
[
"Liu",
"Jing",
""
]
] |
With more and more deep neural networks being deployed as various daily services, their reliability is essential. It is frightening that deep neural networks are vulnerable and sensitive to adversarial attacks, the most common one of which for the services is evasion-based. Recent works usually strengthen the robustness by adversarial training or leveraging the knowledge of an amount of clean data. However, retraining and redeploying the model need a large computational budget, leading to heavy losses to the online service. In addition, when training, it is likely that only limited adversarial examples are available for the service provider, while much clean data may not be accessible. Based on the analysis on the defense for deployed models, we find that how to rapidly defend against a certain attack for a frozen original service model with limitations of few clean and adversarial examples, which is named as RaPiD (Rapid Plug-in Defender), is really important. Motivated by the generalization and the universal computation ability of pre-trained transformer models, we come up with a new defender method, CeTaD, which stands for Considering Pretrained Transformers as Defenders. In particular, we evaluate the effectiveness and the transferability of CeTaD in the case of one-shot adversarial examples and explore the impact of different parts of CeTaD as well as training data conditions. CeTaD is flexible for different differentiable service models, and suitable for various types of attacks.
|
2112.01523
|
Benjamin Attal
|
Benjamin Attal, Jia-Bin Huang, Michael Zollhoefer, Johannes Kopf,
Changil Kim
|
Learning Neural Light Fields with Ray-Space Embedding Networks
|
CVPR 2022 camera ready revision. Major changes include: 1. Additional
comparison to NeX on Stanford, RealFF, Shiny datasets 2. Experiment on 360
degree lego bulldozer scene in the appendix, using Pluecker parameterization
3. Moving student-teacher results to the appendix 4. Clarity edits -- in
particular, making it clear that our Stanford evaluation *does not* use
subdivision
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis
results. However, they are slow to render, requiring hundreds of network
evaluations per pixel to approximate a volume rendering integral. Baking NeRFs
into explicit data structures enables efficient rendering, but results in a
large increase in memory footprint and, in many cases, a quality reduction. In
this paper, we propose a novel neural light field representation that, in
contrast, is compact and directly predicts integrated radiance along rays. Our
method supports rendering with a single network evaluation per pixel for small
baseline light field datasets and can also be applied to larger baselines with
only a few evaluations per pixel. At the core of our approach is a ray-space
embedding network that maps the 4D ray-space manifold into an intermediate,
interpolable latent space. Our method achieves state-of-the-art quality on
dense forward-facing datasets such as the Stanford Light Field dataset. In
addition, for forward-facing scenes with sparser inputs we achieve results that
are competitive with NeRF-based approaches in terms of quality while providing
a better speed/quality/memory trade-off with far fewer network evaluations.
|
[
{
"created": "Thu, 2 Dec 2021 18:59:51 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Dec 2021 17:45:14 GMT",
"version": "v2"
},
{
"created": "Tue, 10 May 2022 17:02:28 GMT",
"version": "v3"
}
] |
2022-05-11
|
[
[
"Attal",
"Benjamin",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Zollhoefer",
"Michael",
""
],
[
"Kopf",
"Johannes",
""
],
[
"Kim",
"Changil",
""
]
] |
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis results. However, they are slow to render, requiring hundreds of network evaluations per pixel to approximate a volume rendering integral. Baking NeRFs into explicit data structures enables efficient rendering, but results in a large increase in memory footprint and, in many cases, a quality reduction. In this paper, we propose a novel neural light field representation that, in contrast, is compact and directly predicts integrated radiance along rays. Our method supports rendering with a single network evaluation per pixel for small baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel. At the core of our approach is a ray-space embedding network that maps the 4D ray-space manifold into an intermediate, interpolable latent space. Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset. In addition, for forward-facing scenes with sparser inputs we achieve results that are competitive with NeRF-based approaches in terms of quality while providing a better speed/quality/memory trade-off with far fewer network evaluations.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.