id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.09925 | Daniel Gibert | Daniel Gibert, Jordi Planes, Quan Le, Giulio Zizzo | Query-Free Evasion Attacks Against Machine Learning-Based Malware
Detectors with Generative Adversarial Networks | null | 2023 IEEE European Symposium on Security and Privacy Workshops | 10.1109/EuroSPW59978.2023.00052 | null | cs.CR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Malware detectors based on machine learning (ML) have been shown to be
susceptible to adversarial malware examples. However, current methods to
generate adversarial malware examples still have their limits. They either rely
on detailed model information (gradient-based attacks), or on detailed outputs
of the model - such as class probabilities (score-based attacks), neither of
which are available in real-world scenarios. Alternatively, adversarial
examples might be crafted using only the label assigned by the detector
(label-based attack) to train a substitute network or an agent using
reinforcement learning. Nonetheless, label-based attacks might require querying
a black-box system from a small number to thousands of times, depending on the
approach, which might not be feasible against malware detectors. This work
presents a novel query-free approach to craft adversarial malware examples to
evade ML-based malware detectors. To this end, we have devised a GAN-based
framework to generate adversarial malware examples that look similar to benign
executables in the feature space. To demonstrate the suitability of our
approach we have applied the GAN-based attack to three common types of features
usually employed by static ML-based malware detectors: (1) Byte histogram
features, (2) API-based features, and (3) String-based features. Results show
that our model-agnostic approach performs on par with MalGAN, while generating
more realistic adversarial malware examples without requiring any query to the
malware detectors. Furthermore, we have tested the generated adversarial
examples against state-of-the-art multimodal and deep learning malware
detectors, showing a decrease in detection performance, as well as a decrease
in the average number of detections by the anti-malware engines in VirusTotal.
| [
{
"created": "Fri, 16 Jun 2023 15:48:40 GMT",
"version": "v1"
}
] | 2023-08-22 | [
[
"Gibert",
"Daniel",
""
],
[
"Planes",
"Jordi",
""
],
[
"Le",
"Quan",
""
],
[
"Zizzo",
"Giulio",
""
]
] | Malware detectors based on machine learning (ML) have been shown to be susceptible to adversarial malware examples. However, current methods to generate adversarial malware examples still have their limits. They either rely on detailed model information (gradient-based attacks), or on detailed outputs of the model - such as class probabilities (score-based attacks), neither of which are available in real-world scenarios. Alternatively, adversarial examples might be crafted using only the label assigned by the detector (label-based attack) to train a substitute network or an agent using reinforcement learning. Nonetheless, label-based attacks might require querying a black-box system from a small number to thousands of times, depending on the approach, which might not be feasible against malware detectors. This work presents a novel query-free approach to craft adversarial malware examples to evade ML-based malware detectors. To this end, we have devised a GAN-based framework to generate adversarial malware examples that look similar to benign executables in the feature space. To demonstrate the suitability of our approach we have applied the GAN-based attack to three common types of features usually employed by static ML-based malware detectors: (1) Byte histogram features, (2) API-based features, and (3) String-based features. Results show that our model-agnostic approach performs on par with MalGAN, while generating more realistic adversarial malware examples without requiring any query to the malware detectors. Furthermore, we have tested the generated adversarial examples against state-of-the-art multimodal and deep learning malware detectors, showing a decrease in detection performance, as well as a decrease in the average number of detections by the anti-malware engines in VirusTotal. |
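The byte-histogram features mentioned in the abstract above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code; the function name and the normalization choice are assumptions:

```python
import numpy as np

def byte_histogram(data: bytes, normalize: bool = True) -> np.ndarray:
    """256-bin histogram of raw byte values, a common static malware feature."""
    hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256).astype(np.float64)
    if normalize and hist.sum() > 0:
        hist /= hist.sum()  # relative frequencies, so the vector is file-length invariant
    return hist

# Tiny example: bytes 0x00, 0x00, 0xFF, 0x41
h = byte_histogram(b"\x00\x00\xff\x41")
```

Normalizing makes the 256-dimensional vector comparable across executables of different sizes, which is typically why histogram features are fed to ML detectors in this form.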
2403.16178 | Manisha Natarajan | Manisha Natarajan, Chunyue Xue, Sanne van Waveren, Karen Feigh,
Matthew Gombolay | Mixed-Initiative Human-Robot Teaming under Suboptimality with Online
Bayesian Adaptation | 8 pages, 4 pages for supplementary | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For effective human-agent teaming, robots and other artificial intelligence
(AI) agents must infer their human partner's abilities and behavioral response
patterns and adapt accordingly. Most prior works make the unrealistic
assumption that one or more teammates can act near-optimally. In real-world
collaboration, humans and autonomous agents can be suboptimal, especially when
each only has partial domain knowledge. In this work, we develop computational
modeling and optimization techniques for enhancing the performance of
suboptimal human-agent teams, where the human and the agent have asymmetric
capabilities and act suboptimally due to incomplete environmental knowledge. We
adopt an online Bayesian approach that enables a robot to infer people's
willingness to comply with its assistance in a sequential decision-making game.
Our user studies show that user preferences and team performance indeed vary
with robot intervention styles, and our approach for mixed-initiative
collaborations enhances objective team performance ($p<.001$) and subjective
measures, such as user's trust ($p<.001$) and perceived likeability of the
robot ($p<.001$).
| [
{
"created": "Sun, 24 Mar 2024 14:38:18 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Natarajan",
"Manisha",
""
],
[
"Xue",
"Chunyue",
""
],
[
"van Waveren",
"Sanne",
""
],
[
"Feigh",
"Karen",
""
],
[
"Gombolay",
"Matthew",
""
]
] | For effective human-agent teaming, robots and other artificial intelligence (AI) agents must infer their human partner's abilities and behavioral response patterns and adapt accordingly. Most prior works make the unrealistic assumption that one or more teammates can act near-optimally. In real-world collaboration, humans and autonomous agents can be suboptimal, especially when each only has partial domain knowledge. In this work, we develop computational modeling and optimization techniques for enhancing the performance of suboptimal human-agent teams, where the human and the agent have asymmetric capabilities and act suboptimally due to incomplete environmental knowledge. We adopt an online Bayesian approach that enables a robot to infer people's willingness to comply with its assistance in a sequential decision-making game. Our user studies show that user preferences and team performance indeed vary with robot intervention styles, and our approach for mixed-initiative collaborations enhances objective team performance ($p<.001$) and subjective measures, such as user's trust ($p<.001$) and perceived likeability of the robot ($p<.001$). |
1912.07744 | Siyuan Huang | Siyuan Huang, Yixin Chen, Tao Yuan, Siyuan Qi, Yixin Zhu, Song-Chun
Zhu | PerspectiveNet: 3D Object Detection from a Single RGB Image via
Perspective Points | NeurIPS 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting 3D objects from a single RGB image is intrinsically ambiguous, thus
requiring appropriate prior knowledge and intermediate representations as
constraints to reduce the uncertainties and improve the consistencies between
the 2D image plane and the 3D world coordinate. To address this challenge, we
propose to adopt perspective points as a new intermediate representation for 3D
object detection, defined as the 2D projections of local Manhattan 3D keypoints
to locate an object; these perspective points satisfy geometric constraints
imposed by the perspective projection. We further devise PerspectiveNet, an
end-to-end trainable model that simultaneously detects the 2D bounding box, 2D
perspective points, and 3D object bounding box for each object from a single
RGB image. PerspectiveNet yields three unique advantages: (i) 3D object
bounding boxes are estimated based on perspective points, bridging the gap
between 2D and 3D bounding boxes without the need of category-specific 3D shape
priors. (ii) It predicts the perspective points by a template-based method, and
a perspective loss is formulated to maintain the perspective constraints. (iii)
It maintains the consistency between the 2D perspective points and 3D bounding
boxes via a differentiable projective function. Experiments on SUN RGB-D
dataset show that the proposed method significantly outperforms existing
RGB-based approaches for 3D object detection.
| [
{
"created": "Mon, 16 Dec 2019 22:58:53 GMT",
"version": "v1"
}
] | 2019-12-18 | [
[
"Huang",
"Siyuan",
""
],
[
"Chen",
"Yixin",
""
],
[
"Yuan",
"Tao",
""
],
[
"Qi",
"Siyuan",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | Detecting 3D objects from a single RGB image is intrinsically ambiguous, thus requiring appropriate prior knowledge and intermediate representations as constraints to reduce the uncertainties and improve the consistencies between the 2D image plane and the 3D world coordinate. To address this challenge, we propose to adopt perspective points as a new intermediate representation for 3D object detection, defined as the 2D projections of local Manhattan 3D keypoints to locate an object; these perspective points satisfy geometric constraints imposed by the perspective projection. We further devise PerspectiveNet, an end-to-end trainable model that simultaneously detects the 2D bounding box, 2D perspective points, and 3D object bounding box for each object from a single RGB image. PerspectiveNet yields three unique advantages: (i) 3D object bounding boxes are estimated based on perspective points, bridging the gap between 2D and 3D bounding boxes without the need of category-specific 3D shape priors. (ii) It predicts the perspective points by a template-based method, and a perspective loss is formulated to maintain the perspective constraints. (iii) It maintains the consistency between the 2D perspective points and 3D bounding boxes via a differentiable projective function. Experiments on SUN RGB-D dataset show that the proposed method significantly outperforms existing RGB-based approaches for 3D object detection. |
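The perspective points in the abstract above are 2D projections of 3D keypoints under the standard pinhole camera model. A minimal sketch of that projection (the intrinsics values here are made up for illustration, not from the paper):

```python
import numpy as np

# Toy camera intrinsics: focal length 100 px, principal point at (50, 50).
K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])

def project(points_3d: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = points_3d @ K.T            # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:]   # perspective divide

pts = project(np.array([[1.0, 2.0, 5.0]]), K)  # -> [[70., 90.]]
```

Because the divide is differentiable, a loss on such projected points can be backpropagated to the 3D box parameters, which is the consistency mechanism the abstract describes.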
2405.09309 | Bikash Kumar Dey | Abhishek Sarkar and Bikash Kumar Dey | Identification via Permutation Channels | 9 pages. Extended and generalized version of submission to ITW 2024 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study message identification over a $q$-ary uniform permutation channel,
where the transmitted vector is permuted by a permutation chosen uniformly at
random. For discrete memoryless channels (DMCs), the number of identifiable
messages grows doubly exponentially. Identification capacity, the maximum
second-order exponent, is known to be the same as the Shannon capacity of the
DMC. Permutation channels support reliable communication of only polynomially
many messages. A simple achievability result shows that message sizes growing
as $2^{c_nn^{q-1}}$ are identifiable for any $c_n\rightarrow 0$. We prove two
converse results. A ``soft'' converse shows that for any $R>0$, there is no
sequence of identification codes with message size growing as $2^{Rn^{q-1}}$
with a power-law decay ($n^{-\mu}$) of the error probability. We also prove a
``strong" converse showing that for any sequence of identification codes with
message size $2^{Rn^{q-1}\log n}$ ($R>0$), the sum of type I and type II error
probabilities approaches at least $1$ as $n\rightarrow \infty$. To prove the
soft converse, we use a sequence of steps to construct a new identification
code with a simpler structure which relates to a set system, and then use a
lower bound on the normalized maximum pairwise intersection of a set system. To
prove the strong converse, we use results on approximation of distributions.
| [
{
"created": "Wed, 15 May 2024 13:07:35 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jun 2024 11:58:02 GMT",
"version": "v2"
}
] | 2024-06-05 | [
[
"Sarkar",
"Abhishek",
""
],
[
"Dey",
"Bikash Kumar",
""
]
] | We study message identification over a $q$-ary uniform permutation channel, where the transmitted vector is permuted by a permutation chosen uniformly at random. For discrete memoryless channels (DMCs), the number of identifiable messages grows doubly exponentially. Identification capacity, the maximum second-order exponent, is known to be the same as the Shannon capacity of the DMC. Permutation channels support reliable communication of only polynomially many messages. A simple achievability result shows that message sizes growing as $2^{c_nn^{q-1}}$ are identifiable for any $c_n\rightarrow 0$. We prove two converse results. A ``soft'' converse shows that for any $R>0$, there is no sequence of identification codes with message size growing as $2^{Rn^{q-1}}$ with a power-law decay ($n^{-\mu}$) of the error probability. We also prove a ``strong" converse showing that for any sequence of identification codes with message size $2^{Rn^{q-1}\log n}$ ($R>0$), the sum of type I and type II error probabilities approaches at least $1$ as $n\rightarrow \infty$. To prove the soft converse, we use a sequence of steps to construct a new identification code with a simpler structure which relates to a set system, and then use a lower bound on the normalized maximum pairwise intersection of a set system. To prove the strong converse, we use results on approximation of distributions. |
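A uniform permutation channel, as in the abstract above, destroys the order of the transmitted vector but preserves its type (empirical histogram), which is why only on the order of $n^{q-1}$ distinct messages can be told apart. A tiny illustration (parameter values chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 3, 12  # alphabet size and block length (illustrative values)

def permutation_channel(x: np.ndarray) -> np.ndarray:
    """Apply a uniformly random permutation to the transmitted vector."""
    return x[rng.permutation(len(x))]

x = rng.integers(0, q, n)      # transmitted q-ary vector
y = permutation_channel(x)     # received vector, reordered

# Only the type (empirical histogram) of x survives the channel.
type_x = np.bincount(x, minlength=q)
type_y = np.bincount(y, minlength=q)
```

The number of distinct types of length-$n$ $q$-ary vectors is $\binom{n+q-1}{q-1} = \Theta(n^{q-1})$, matching the $n^{q-1}$ exponent that appears in the achievability and converse bounds above.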
2008.08311 | Seokwoo Jung | Seokwoo Jung, Sungha Choi, Mohammad Azam Khan, Jaegul Choo | Towards Lightweight Lane Detection by Optimizing Spatial Embedding | Preprint - work in progress | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of lane detection methods depend on a proposal-free instance
segmentation because of its adaptability to flexible object shape, occlusion,
and real-time application. This paper addresses the problem that pixel
embedding in proposal-free instance segmentation based lane detection is
difficult to optimize. A translation invariance of convolution, which is one of
the supposed strengths, causes challenges in optimizing pixel embedding. In
this work, we propose a lane detection method based on proposal-free instance
segmentation, directly optimizing spatial embedding of pixels using image
coordinate. Our proposed method allows the post-processing step for center
localization and optimizes clustering in an end-to-end manner. The proposed
method enables real-time lane detection through the simplicity of
post-processing and the adoption of a lightweight backbone. Our proposed method
demonstrates competitive performance on public lane detection datasets.
| [
{
"created": "Wed, 19 Aug 2020 07:37:04 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Aug 2020 06:45:20 GMT",
"version": "v2"
}
] | 2020-08-28 | [
[
"Jung",
"Seokwoo",
""
],
[
"Choi",
"Sungha",
""
],
[
"Khan",
"Mohammad Azam",
""
],
[
"Choo",
"Jaegul",
""
]
] | A number of lane detection methods depend on a proposal-free instance segmentation because of its adaptability to flexible object shape, occlusion, and real-time application. This paper addresses the problem that pixel embedding in proposal-free instance segmentation based lane detection is difficult to optimize. A translation invariance of convolution, which is one of the supposed strengths, causes challenges in optimizing pixel embedding. In this work, we propose a lane detection method based on proposal-free instance segmentation, directly optimizing spatial embedding of pixels using image coordinate. Our proposed method allows the post-processing step for center localization and optimizes clustering in an end-to-end manner. The proposed method enables real-time lane detection through the simplicity of post-processing and the adoption of a lightweight backbone. Our proposed method demonstrates competitive performance on public lane detection datasets. |
1311.1762 | Tsvi Kopelowitz | Richard Cole, Tsvi Kopelowitz, Moshe Lewenstein | Suffix Trays and Suffix Trists: Structures for Faster Text Indexing | Results from this paper have appeared as an extended abstract in
ICALP 2006 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Suffix trees and suffix arrays are two of the most widely used data
structures for text indexing. Each uses linear space and can be constructed in
linear time for polynomially sized alphabets. However, when it comes to
answering queries with worst-case deterministic time bounds, the former does so
in $O(m\log|\Sigma|)$ time, where $m$ is the query size, $|\Sigma|$ is the
alphabet size, and the latter does so in $O(m+\log n)$ time, where $n$ is the
text size. If one wants to output all appearances of the query, an additive
cost of $O(occ)$ time is sufficient, where $occ$ is the size of the output.
We propose a novel way of combining the two into, what we call, a {\em suffix
tray}. The space and construction time remain linear and the query time
improves to $O(m+\log|\Sigma|)$ for integer alphabets from a linear range, i.e.
$\Sigma \subset \{1,\cdots, cn\}$, for an arbitrary constant $c$. The
construction and query are deterministic. Here also an additive $O(occ)$ time
is sufficient if one desires to output all appearances of the query.
We also consider the online version of indexing, where the text arrives
online, one character at a time, and indexing queries are answered in tandem.
In this variant we create a cross between a suffix tree and a suffix list (a
dynamic variant of suffix array) to be called a {\em suffix trist}; it supports
queries in $O(m+\log|\Sigma|)$ time. The suffix trist also uses linear space.
Furthermore, if there exists an online construction for a linear-space suffix
tree such that the cost of adding a character is worst-case deterministic
$f(n,|\Sigma|)$ ($n$ is the size of the current text), then one can further
update the suffix trist in $O(f(n,|\Sigma|)+\log |\Sigma|)$ time. The best
currently known worst-case deterministic bound for $f(n,|\Sigma|)$ is $O(\log
n)$ time.
| [
{
"created": "Thu, 7 Nov 2013 17:44:02 GMT",
"version": "v1"
}
] | 2013-11-08 | [
[
"Cole",
"Richard",
""
],
[
"Kopelowitz",
"Tsvi",
""
],
[
"Lewenstein",
"Moshe",
""
]
] | Suffix trees and suffix arrays are two of the most widely used data structures for text indexing. Each uses linear space and can be constructed in linear time for polynomially sized alphabets. However, when it comes to answering queries with worst-case deterministic time bounds, the former does so in $O(m\log|\Sigma|)$ time, where $m$ is the query size, $|\Sigma|$ is the alphabet size, and the latter does so in $O(m+\log n)$ time, where $n$ is the text size. If one wants to output all appearances of the query, an additive cost of $O(occ)$ time is sufficient, where $occ$ is the size of the output. We propose a novel way of combining the two into, what we call, a {\em suffix tray}. The space and construction time remain linear and the query time improves to $O(m+\log|\Sigma|)$ for integer alphabets from a linear range, i.e. $\Sigma \subset \{1,\cdots, cn\}$, for an arbitrary constant $c$. The construction and query are deterministic. Here also an additive $O(occ)$ time is sufficient if one desires to output all appearances of the query. We also consider the online version of indexing, where the text arrives online, one character at a time, and indexing queries are answered in tandem. In this variant we create a cross between a suffix tree and a suffix list (a dynamic variant of suffix array) to be called a {\em suffix trist}; it supports queries in $O(m+\log|\Sigma|)$ time. The suffix trist also uses linear space. Furthermore, if there exists an online construction for a linear-space suffix tree such that the cost of adding a character is worst-case deterministic $f(n,|\Sigma|)$ ($n$ is the size of the current text), then one can further update the suffix trist in $O(f(n,|\Sigma|)+\log |\Sigma|)$ time. The best currently known worst-case deterministic bound for $f(n,|\Sigma|)$ is $O(\log n)$ time. |
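The suffix-array query that the abstract above contrasts against can be sketched as a binary search over sorted suffixes. This simplified version runs in $O(m \log n)$ rather than the $O(m+\log n)$ achieved with LCP information, and uses a naive construction; it is for illustration only:

```python
from bisect import bisect_left, bisect_right

def suffix_array(text: str) -> list[int]:
    """Naive O(n^2 log n) construction; fine for illustration."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text: str, sa: list[int], pattern: str) -> list[int]:
    """All starting positions of pattern, via binary search over sorted suffixes."""
    prefixes = [text[i:i + len(pattern)] for i in sa]  # suffixes truncated to |pattern|
    lo = bisect_left(prefixes, pattern)
    hi = bisect_right(prefixes, pattern)
    return sorted(sa[lo:hi])  # the extra O(occ) cost to report all matches

text = "banana"
sa = suffix_array(text)              # [5, 3, 1, 0, 4, 2]
hits = occurrences(text, sa, "ana")  # [1, 3]
```

The suffix-tray idea above replaces this $\log n$-flavored search with a structure whose query cost depends on $|\Sigma|$ instead of $n$.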
1006.2805 | Jenny Blight | Saeed Tavakoli and Amir Banookh | Robust PI Control Design Using Particle Swarm Optimization | Submitted to Journal of Computer Science and Engineering, see
http://sites.google.com/site/jcseuk/volume-1-issue-1-may-2010 | Journal of Computer Science and Engineering, Volume 1, Issue 1,
p36-41, May 2010 | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a set of robust PI tuning formulae for a first order plus
dead time process using particle swarm optimization. Also, tuning formulae for
an integrating process with dead time, which is a special case of a first order
plus dead time process, are given. The design problem considers three essential requirements of control problems, namely load disturbance rejection, setpoint regulation and robustness of the closed-loop system against model uncertainties.
The primary design goal is to optimize load disturbance rejection. Robustness
is guaranteed by requiring that the maximum sensitivity is less than or equal
to a specified value. In the first step, PI controller parameters are
determined such that the IAE criterion to a load disturbance step is minimized
and the robustness constraint on maximum sensitivity is satisfied. Using a
structure with two degrees of freedom which introduces an extra parameter, the
setpoint weight, good setpoint regulation is achieved in the second step. The
main advantage of the proposed method is its simplicity. Once the equivalent
first order plus dead time model is determined, the PI parameters are
explicitly given by a set of tuning formulae. In order to show the performance
and effectiveness of the proposed tuning formulae, they are applied to three
simulation examples.
| [
{
"created": "Mon, 14 Jun 2010 19:03:53 GMT",
"version": "v1"
}
] | 2010-06-15 | [
[
"Tavakoli",
"Saeed",
""
],
[
"Banookh",
"Amir",
""
]
] | This paper presents a set of robust PI tuning formulae for a first order plus dead time process using particle swarm optimization. Also, tuning formulae for an integrating process with dead time, which is a special case of a first order plus dead time process, are given. The design problem considers three essential requirements of control problems, namely load disturbance rejection, setpoint regulation and robustness of the closed-loop system against model uncertainties. The primary design goal is to optimize load disturbance rejection. Robustness is guaranteed by requiring that the maximum sensitivity is less than or equal to a specified value. In the first step, PI controller parameters are determined such that the IAE criterion to a load disturbance step is minimized and the robustness constraint on maximum sensitivity is satisfied. Using a structure with two degrees of freedom which introduces an extra parameter, the setpoint weight, good setpoint regulation is achieved in the second step. The main advantage of the proposed method is its simplicity. Once the equivalent first order plus dead time model is determined, the PI parameters are explicitly given by a set of tuning formulae. In order to show the performance and effectiveness of the proposed tuning formulae, they are applied to three simulation examples. |
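The particle swarm optimization step described in the abstract above can be sketched generically. This toy minimizes a stand-in objective rather than the IAE of a closed loop, and the hyperparameters are common PSO defaults, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (illustrative sketch)."""
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                  # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()            # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Stand-in objective; the paper instead minimizes IAE subject to a sensitivity bound.
best, best_val = pso(lambda x: np.sum(x ** 2))
```

In the paper's setting, the decision variables would be the PI gains and the objective would simulate the closed loop, penalizing candidates that violate the maximum-sensitivity constraint.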
1509.04674 | Anastasios Papazafeiropoulos | Anastasios K. Papazafeiropoulos, Shree Krishna Sharma, and Symeon
Chatzinotas | Impact of Transceiver Impairments on the Capacity of Dual-Hop Relay
Massive MIMO Systems | 6 pages, 4 figures, accepted in IEEE Global Communications Conference
(GLOBECOM 2015) - Workshop on Massive MIMO: From theory to practice, 2015 | null | 10.1109/GLOCOMW.2015.7414137 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the deleterious effect of hardware impairments on communication
systems, most prior works have not investigated their impact on widely used
relay systems. Most importantly, the application of inexpensive transceivers,
being prone to hardware impairments, is the most cost-efficient way for the
implementation of massive multiple-input multiple-output (MIMO) systems.
Consequently, this paper investigates the impact of hardware impairments on MIMO relay networks with a large number of antennas. Specifically, we obtain the general expression for the ergodic capacity of dual-hop (DH) amplify-and-forward (AF) relay systems. Next, given the advantages of free probability (FP) theory in comparison with other known techniques in large random matrix theory, we pursue a large-limit analysis in the number of antennas and users, shedding light on the behavior of relay systems afflicted by hardware impairments.
| [
{
"created": "Tue, 15 Sep 2015 18:45:52 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Papazafeiropoulos",
"Anastasios K.",
""
],
[
"Sharma",
"Shree Krishna",
""
],
[
"Chatzinotas",
"Symeon",
""
]
] | Despite the deleterious effect of hardware impairments on communication systems, most prior works have not investigated their impact on widely used relay systems. Most importantly, the application of inexpensive transceivers, being prone to hardware impairments, is the most cost-efficient way for the implementation of massive multiple-input multiple-output (MIMO) systems. Consequently, this paper investigates the impact of hardware impairments on MIMO relay networks with a large number of antennas. Specifically, we obtain the general expression for the ergodic capacity of dual-hop (DH) amplify-and-forward (AF) relay systems. Next, given the advantages of free probability (FP) theory in comparison with other known techniques in large random matrix theory, we pursue a large-limit analysis in the number of antennas and users, shedding light on the behavior of relay systems afflicted by hardware impairments. |
1801.08754 | Clare Llewellyn | Clare Llewellyn, Laura Cram, Adrian Favero, Robin L. Hill | For Whom the Bell Trolls: Troll Behaviour in the Twitter Brexit Debate | null | null | null | null | cs.SI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a review into automated and malicious activity Twitter released a list of
accounts that they believed were connected to state sponsored manipulation of
the 2016 American Election. This list details 2,752 accounts Twitter believed
to be controlled by Russian operatives. In the absence of a similar list of
operatives active within the debate on the 2016 UK referendum on membership of
the European Union (Brexit) we investigated the behaviour of the same American
Election focused accounts in the production of content related to the UK-EU
referendum. We found that within our dataset we had Brexit-related content from
419 of these accounts, leading to 3,485 identified tweets gathered between the
29th August 2015 and 3rd October 2017. The behaviour of the accounts altered
radically on the day of the referendum, shifting from generalised disruptive
tweeting to retweeting each other in order to amplify content produced by other
troll accounts. We also demonstrate that, while these accounts are, in general,
designed to resemble American citizens, accounts created in 2016 often
contained German locations and terms in the user profiles.
| [
{
"created": "Fri, 26 Jan 2018 11:02:26 GMT",
"version": "v1"
}
] | 2018-01-29 | [
[
"Llewellyn",
"Clare",
""
],
[
"Cram",
"Laura",
""
],
[
"Favero",
"Adrian",
""
],
[
"Hill",
"Robin L.",
""
]
] | In a review into automated and malicious activity Twitter released a list of accounts that they believed were connected to state sponsored manipulation of the 2016 American Election. This list details 2,752 accounts Twitter believed to be controlled by Russian operatives. In the absence of a similar list of operatives active within the debate on the 2016 UK referendum on membership of the European Union (Brexit) we investigated the behaviour of the same American Election focused accounts in the production of content related to the UK-EU referendum. We found that within our dataset we had Brexit-related content from 419 of these accounts, leading to 3,485 identified tweets gathered between the 29th August 2015 and 3rd October 2017. The behaviour of the accounts altered radically on the day of the referendum, shifting from generalised disruptive tweeting to retweeting each other in order to amplify content produced by other troll accounts. We also demonstrate that, while these accounts are, in general, designed to resemble American citizens, accounts created in 2016 often contained German locations and terms in the user profiles. |
2309.15111 | Margalit Glasgow | Margalit Glasgow | SGD Finds then Tunes Features in Two-Layer Neural Networks with
near-Optimal Sample Complexity: A Case Study in the XOR problem | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | In this work, we consider the optimization process of minibatch stochastic
gradient descent (SGD) on a 2-layer neural network with data separated by a
quadratic ground truth function. We prove that with data drawn from the
$d$-dimensional Boolean hypercube labeled by the quadratic ``XOR'' function $y
= -x_ix_j$, it is possible to train to a population error $o(1)$ with $d
\:\text{polylog}(d)$ samples. Our result considers simultaneously training both
layers of the two-layer-neural network with ReLU activations via standard
minibatch SGD on the logistic loss. To our knowledge, this work is the first to
give a sample complexity of $\tilde{O}(d)$ for efficiently learning the XOR
function on isotropic data on a standard neural network with standard training.
Our main technique is showing that the network evolves in two phases: a
$\textit{signal-finding}$ phase where the network is small and many of the
neurons evolve independently to find features, and a $\textit{signal-heavy}$
phase, where SGD maintains and balances the features. We leverage the
simultaneous training of the layers to show that it is sufficient for only a
small fraction of the neurons to learn features, since those neurons will be
amplified by the simultaneous growth of their second layer weights.
| [
{
"created": "Tue, 26 Sep 2023 17:57:44 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Oct 2023 14:21:45 GMT",
"version": "v2"
}
] | 2023-10-03 | [
[
"Glasgow",
"Margalit",
""
]
] | In this work, we consider the optimization process of minibatch stochastic gradient descent (SGD) on a 2-layer neural network with data separated by a quadratic ground truth function. We prove that with data drawn from the $d$-dimensional Boolean hypercube labeled by the quadratic ``XOR'' function $y = -x_ix_j$, it is possible to train to a population error $o(1)$ with $d \:\text{polylog}(d)$ samples. Our result considers simultaneously training both layers of the two-layer-neural network with ReLU activations via standard minibatch SGD on the logistic loss. To our knowledge, this work is the first to give a sample complexity of $\tilde{O}(d)$ for efficiently learning the XOR function on isotropic data on a standard neural network with standard training. Our main technique is showing that the network evolves in two phases: a $\textit{signal-finding}$ phase where the network is small and many of the neurons evolve independently to find features, and a $\textit{signal-heavy}$ phase, where SGD maintains and balances the features. We leverage the simultaneous training of the layers to show that it is sufficient for only a small fraction of the neurons to learn features, since those neurons will be amplified by the simultaneous growth of their second layer weights. |
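The training setup in the abstract above can be instantiated as a toy: minibatch SGD on the logistic loss of a two-layer ReLU network, updating both layers, with hypercube inputs labeled by the quadratic XOR. The dimensions, width, and step size here are arbitrary illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, batch, steps, lr = 10, 64, 64, 2000, 0.1

# Inputs on the Boolean hypercube {-1,+1}^d, labels from the quadratic "XOR" y = -x_0 x_1.
X = rng.choice([-1.0, 1.0], size=(4000, d))
y = -X[:, 0] * X[:, 1]

W = rng.normal(0.0, 1.0 / np.sqrt(d), (d, width))   # first-layer weights
a = rng.normal(0.0, 1.0 / np.sqrt(width), width)    # second-layer weights

def mean_logistic_loss(Xs, ys):
    h = np.maximum(Xs @ W, 0.0)
    return np.logaddexp(0.0, -ys * (h @ a)).mean()  # stable log(1 + e^{-margin})

loss_before = mean_logistic_loss(X, y)
for _ in range(steps):
    idx = rng.integers(0, len(X), batch)
    Xb, yb = X[idx], y[idx]
    h = np.maximum(Xb @ W, 0.0)                     # ReLU hidden activations
    m = yb * (h @ a)                                # per-example margin
    g = -yb * 0.5 * (1.0 - np.tanh(0.5 * m))        # dLoss/dlogit = -y * sigmoid(-m)
    W -= lr * (Xb.T @ (g[:, None] * (h > 0) * a) / batch)  # gradient through ReLU
    a -= lr * (g[:, None] * h).mean(axis=0)
    # both layers are updated simultaneously, as in the setting above
loss_after = mean_logistic_loss(X, y)
```

With enough steps one would expect to see the two phases the abstract describes: a slow signal-finding plateau followed by rapid loss decrease once a few neurons align with the XOR directions.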
2004.02753 | Joshua Knights Mr | Joshua Knights, Ben Harwood, Daniel Ward, Anthony Vanderkop, Olivia
Mackenzie-Ross, Peyman Moghadam | Temporally Coherent Embeddings for Self-Supervised Video Representation
Learning | Accepted at ICPR 2020. Project page:
https://csiro-robotics.github.io/TCE-Webpage/ | null | null | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents TCE: Temporally Coherent Embeddings for self-supervised
video representation learning. The proposed method exploits inherent structure
of unlabeled video data to explicitly enforce temporal coherency in the
embedding space, rather than indirectly learning it through ranking or
predictive proxy tasks. In the same way that high-level visual information in
the world changes smoothly, we believe that nearby frames in learned
representations will benefit from demonstrating similar properties. Using this
assumption, we train our TCE model to encode videos such that adjacent frames
exist close to each other and videos are separated from one another. Using TCE
we learn robust representations from large quantities of unlabeled video data.
We thoroughly analyse and evaluate our self-supervised learned TCE models on a
downstream task of video action recognition using multiple challenging
benchmarks (Kinetics400, UCF101, HMDB51). With a simple but effective 2D-CNN
backbone and only RGB stream inputs, TCE pre-trained representations outperform
all previous self-supervised 2D-CNN and 3D-CNN pre-trained on UCF101. The code
and pre-trained models for this paper can be downloaded at:
https://github.com/csiro-robotics/TCE
| [
{
"created": "Sat, 21 Mar 2020 12:25:50 GMT",
"version": "v1"
},
{
"created": "Fri, 1 May 2020 00:24:07 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Jul 2020 09:03:55 GMT",
"version": "v3"
},
{
"created": "Tue, 11 Aug 2020 05:48:04 GMT",
"version": "v4"
},
{
"created": "Tue, 17 Nov 2020 04:21:35 GMT",
"version": "v5"
}
] | 2020-11-18 | [
[
"Knights",
"Joshua",
""
],
[
"Harwood",
"Ben",
""
],
[
"Ward",
"Daniel",
""
],
[
"Vanderkop",
"Anthony",
""
],
[
"Mackenzie-Ross",
"Olivia",
""
],
[
"Moghadam",
"Peyman",
""
]
] | This paper presents TCE: Temporally Coherent Embeddings for self-supervised video representation learning. The proposed method exploits inherent structure of unlabeled video data to explicitly enforce temporal coherency in the embedding space, rather than indirectly learning it through ranking or predictive proxy tasks. In the same way that high-level visual information in the world changes smoothly, we believe that nearby frames in learned representations will benefit from demonstrating similar properties. Using this assumption, we train our TCE model to encode videos such that adjacent frames exist close to each other and videos are separated from one another. Using TCE we learn robust representations from large quantities of unlabeled video data. We thoroughly analyse and evaluate our self-supervised learned TCE models on a downstream task of video action recognition using multiple challenging benchmarks (Kinetics400, UCF101, HMDB51). With a simple but effective 2D-CNN backbone and only RGB stream inputs, TCE pre-trained representations outperform all previous self-supervised 2D-CNN and 3D-CNN pre-trained on UCF101. The code and pre-trained models for this paper can be downloaded at: https://github.com/csiro-robotics/TCE |
2007.00959 | Mingyuan Jiu | Mingyuan Jiu, Nelly Pustelnik | A deep primal-dual proximal network for image restoration | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image restoration remains a challenging task in image processing. Numerous
methods tackle this problem, often solved by minimizing a non-smooth penalized
co-log-likelihood function. Although the solution is easily interpretable with
theoretic guarantees, its estimation relies on an optimization process that can
take time. Considering the research effort in deep learning for image
classification and segmentation, this class of methods offers a serious
alternative for performing image restoration but remains challenging when
solving inverse problems. In this work, we design a deep network, named
DeepPDNet, built from
primal-dual proximal iterations associated with the minimization of a standard
penalized likelihood with an analysis prior, allowing us to take advantage of
both worlds.
We reformulate a specific instance of the Condat-Vu primal-dual hybrid
gradient (PDHG) algorithm as a deep network with fixed layers. The learned
parameters are both the PDHG algorithm step-sizes and the analysis linear
operator involved in the penalization (including the regularization parameter).
These parameters are allowed to vary from one layer to the next. Two different
learning strategies, "Full learning" and "Partial learning", are proposed: the
first is the most efficient numerically, while the second relies on standard
constraints ensuring convergence of the standard PDHG iterations.
Moreover, global and local sparse analysis priors are studied to seek a better
feature representation. We apply the proposed methods to image restoration on
the MNIST and BSD68 datasets and to single image super-resolution on the BSD100
and SET14 datasets. Extensive results show that the proposed DeepPDNet
demonstrates excellent performance on the MNIST and the more complex BSD68,
BSD100, and SET14 datasets for image restoration and single image
super-resolution tasks.
| [
{
"created": "Thu, 2 Jul 2020 08:29:52 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 06:10:23 GMT",
"version": "v2"
},
{
"created": "Mon, 20 Dec 2021 14:12:15 GMT",
"version": "v3"
}
] | 2021-12-21 | [
[
"Jiu",
"Mingyuan",
""
],
[
"Pustelnik",
"Nelly",
""
]
] | Image restoration remains a challenging task in image processing. Numerous methods tackle this problem, often solved by minimizing a non-smooth penalized co-log-likelihood function. Although the solution is easily interpretable with theoretic guarantees, its estimation relies on an optimization process that can take time. Considering the research effort in deep learning for image classification and segmentation, this class of methods offers a serious alternative for performing image restoration but remains challenging when solving inverse problems. In this work, we design a deep network, named DeepPDNet, built from primal-dual proximal iterations associated with the minimization of a standard penalized likelihood with an analysis prior, allowing us to take advantage of both worlds. We reformulate a specific instance of the Condat-Vu primal-dual hybrid gradient (PDHG) algorithm as a deep network with fixed layers. The learned parameters are both the PDHG algorithm step-sizes and the analysis linear operator involved in the penalization (including the regularization parameter). These parameters are allowed to vary from one layer to the next. Two different learning strategies, "Full learning" and "Partial learning", are proposed: the first is the most efficient numerically, while the second relies on standard constraints ensuring convergence of the standard PDHG iterations. Moreover, global and local sparse analysis priors are studied to seek a better feature representation. We apply the proposed methods to image restoration on the MNIST and BSD68 datasets and to single image super-resolution on the BSD100 and SET14 datasets. Extensive results show that the proposed DeepPDNet demonstrates excellent performance on the MNIST and the more complex BSD68, BSD100, and SET14 datasets for image restoration and single image super-resolution tasks. |
1810.09729 | Mohammed Ali Al-Garadi Dr | Reza Shakeri, Mohammed Ali Al-Garadi, Ahmed Badawy, Amr Mohamed, Tamer
Khattab, Abdulla Al-Ali, Khaled A. Harras, Mohsen Guizani | Design Challenges of Multi-UAV Systems in Cyber-Physical Applications: A
Comprehensive Survey, and Future Directions | null | null | null | null | cs.RO cs.AI cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unmanned Aerial Vehicles (UAVs) have recently rapidly grown to facilitate a
wide range of innovative applications that can fundamentally change the way
cyber-physical systems (CPSs) are designed. CPSs are a modern generation of
systems with synergic cooperation between computational and physical potentials
that can interact with humans through several new mechanisms. The main
advantages of using UAVs in CPS applications are their exceptional features,
including their mobility, dynamism, effortless deployment, adaptive altitude,
agility, adjustability, and effective appraisal of real-world functions anytime
and anywhere. Furthermore, from the technology perspective, UAVs are predicted
to be a vital element of the development of advanced CPSs. Therefore, in this
survey, we aim to pinpoint the most fundamental and important design challenges
of multi-UAV systems for CPS applications. We highlight key and versatile
aspects that span the coverage and tracking of targets and infrastructure
objects, energy-efficient navigation, and image analysis using machine learning
for fine-grained CPS applications. Key prototypes and testbeds are also
investigated to show how these practical technologies can facilitate CPS
applications. We present and propose state-of-the-art algorithms to address
design challenges with both quantitative and qualitative methods and map these
challenges with important CPS applications to draw insightful conclusions on
the challenges of each application. Finally, we summarize potential new
directions and ideas that could shape future research in these areas.
| [
{
"created": "Tue, 23 Oct 2018 08:51:54 GMT",
"version": "v1"
}
] | 2018-10-24 | [
[
"Shakeri",
"Reza",
""
],
[
"Al-Garadi",
"Mohammed Ali",
""
],
[
"Badawy",
"Ahmed",
""
],
[
"Mohamed",
"Amr",
""
],
[
"Khattab",
"Tamer",
""
],
[
"Al-Ali",
"Abdulla",
""
],
[
"Harras",
"Khaled A.",
""
],
[
"Guizani",
"Mohsen",
""
]
] | Unmanned Aerial Vehicles (UAVs) have recently rapidly grown to facilitate a wide range of innovative applications that can fundamentally change the way cyber-physical systems (CPSs) are designed. CPSs are a modern generation of systems with synergic cooperation between computational and physical potentials that can interact with humans through several new mechanisms. The main advantages of using UAVs in CPS applications are their exceptional features, including their mobility, dynamism, effortless deployment, adaptive altitude, agility, adjustability, and effective appraisal of real-world functions anytime and anywhere. Furthermore, from the technology perspective, UAVs are predicted to be a vital element of the development of advanced CPSs. Therefore, in this survey, we aim to pinpoint the most fundamental and important design challenges of multi-UAV systems for CPS applications. We highlight key and versatile aspects that span the coverage and tracking of targets and infrastructure objects, energy-efficient navigation, and image analysis using machine learning for fine-grained CPS applications. Key prototypes and testbeds are also investigated to show how these practical technologies can facilitate CPS applications. We present and propose state-of-the-art algorithms to address design challenges with both quantitative and qualitative methods and map these challenges with important CPS applications to draw insightful conclusions on the challenges of each application. Finally, we summarize potential new directions and ideas that could shape future research in these areas. |
2205.01324 | Alon Berliner | Alon Berliner, Guy Rotman, Yossi Adi, Roi Reichart, Tamir Hazan | Learning Discrete Structured Variational Auto-Encoder using Natural
Evolution Strategies | Published as a conference paper at ICLR 2022 | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discrete variational auto-encoders (VAEs) are able to represent semantic
latent spaces in generative learning. In many real-life settings, the discrete
latent space consists of high-dimensional structures, and propagating gradients
through the relevant structures often requires enumerating over an
exponentially large latent space. Recently, various approaches were devised to
propagate approximated gradients without enumerating over the space of possible
structures. In this work, we use Natural Evolution Strategies (NES), a class of
gradient-free black-box optimization algorithms, to learn discrete structured
VAEs. The NES algorithms are computationally appealing as they estimate
gradients with forward pass evaluations only, and thus do not require
propagating gradients through their discrete structures. We demonstrate
empirically that optimizing discrete structured VAEs using NES is as effective
as gradient-based approximations. Lastly, we prove that NES converges for
non-Lipschitz functions such as those appearing in discrete structured VAEs.
| [
{
"created": "Tue, 3 May 2022 06:21:40 GMT",
"version": "v1"
}
] | 2022-05-04 | [
[
"Berliner",
"Alon",
""
],
[
"Rotman",
"Guy",
""
],
[
"Adi",
"Yossi",
""
],
[
"Reichart",
"Roi",
""
],
[
"Hazan",
"Tamir",
""
]
] | Discrete variational auto-encoders (VAEs) are able to represent semantic latent spaces in generative learning. In many real-life settings, the discrete latent space consists of high-dimensional structures, and propagating gradients through the relevant structures often requires enumerating over an exponentially large latent space. Recently, various approaches were devised to propagate approximated gradients without enumerating over the space of possible structures. In this work, we use Natural Evolution Strategies (NES), a class of gradient-free black-box optimization algorithms, to learn discrete structured VAEs. The NES algorithms are computationally appealing as they estimate gradients with forward pass evaluations only, and thus do not require propagating gradients through their discrete structures. We demonstrate empirically that optimizing discrete structured VAEs using NES is as effective as gradient-based approximations. Lastly, we prove that NES converges for non-Lipschitz functions such as those appearing in discrete structured VAEs. |
2006.00577 | Alessandro Ecclesie Agazzi | Alessandro Ecclesie Agazzi | Phishing and Spear Phishing: examples in Cyber Espionage and techniques
to protect against them | null | null | null | null | cs.CR cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phishing attacks have become the most used technique in online scams,
initiating more than 91% of cyberattacks from 2012 onwards. This study reviews
how Phishing and Spear Phishing attacks are carried out by the phishers,
through 5 steps which magnify the outcome, increasing the chance of success.
Focus will also be given to four different layers of protection against these
social engineering attacks, showing their strengths and weaknesses; the first
and second layers consist of automated tools and decision-aid tools, the third
is users' knowledge and expertise to deal with potential threats. The last
layer, defined as "external", will underline the importance of having
multi-factor authentication, an effective way to provide enhanced security,
creating a further layer of protection against Phishing and Spear Phishing.
| [
{
"created": "Sun, 31 May 2020 18:10:09 GMT",
"version": "v1"
}
] | 2020-06-02 | [
[
"Agazzi",
"Alessandro Ecclesie",
""
]
] | Phishing attacks have become the most used technique in online scams, initiating more than 91% of cyberattacks from 2012 onwards. This study reviews how Phishing and Spear Phishing attacks are carried out by the phishers, through 5 steps which magnify the outcome, increasing the chance of success. Focus will also be given to four different layers of protection against these social engineering attacks, showing their strengths and weaknesses; the first and second layers consist of automated tools and decision-aid tools, the third is users' knowledge and expertise to deal with potential threats. The last layer, defined as "external", will underline the importance of having multi-factor authentication, an effective way to provide enhanced security, creating a further layer of protection against Phishing and Spear Phishing. |
1003.3689 | Murat Manguoglu | Murat Manguoglu | A Highly Efficient Parallel Algorithm for Computing the Fiedler Vector | This paper has been withdrawn by the author because it is under
revision | null | null | null | cs.NA cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper has been withdrawn by the author.
| [
{
"created": "Thu, 18 Mar 2010 22:56:57 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Feb 2013 19:44:27 GMT",
"version": "v2"
}
] | 2015-03-13 | [
[
"Manguoglu",
"Murat",
""
]
] | This paper has been withdrawn by the author. |
1404.2943 | Thomas Bl\"asius | Thomas Bl\"asius, Sebastian Lehmann, Ignaz Rutter | Orthogonal Graph Drawing with Inflexible Edges | 23 pages, 5 figures | null | null | null | cs.DS cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of creating plane orthogonal drawings of 4-planar
graphs (planar graphs with maximum degree 4) with constraints on the number of
bends per edge. More precisely, we have a flexibility function assigning to
each edge $e$ a natural number $\mathrm{flex}(e)$, its flexibility. The problem
FlexDraw asks whether there exists an orthogonal drawing such that each edge
$e$ has at most $\mathrm{flex}(e)$ bends. It is known that FlexDraw is NP-hard
if $\mathrm{flex}(e) = 0$ for every edge $e$. On the other hand, FlexDraw can
be solved efficiently if $\mathrm{flex}(e) \ge 1$ and is trivial if
$\mathrm{flex}(e) \ge 2$ for every edge $e$.
To close the gap between the NP-hardness for $\mathrm{flex}(e) = 0$ and the
efficient algorithm for $\mathrm{flex}(e) \ge 1$, we investigate the
computational complexity of FlexDraw in case only few edges are inflexible
(i.e., have flexibility~$0$). We show that for any $\varepsilon > 0$ FlexDraw
is NP-complete for instances with $O(n^\varepsilon)$ inflexible edges with
pairwise distance $\Omega(n^{1-\varepsilon})$ (including the case where they
induce a matching). On the other hand, we give an FPT-algorithm with running
time $O(2^k\cdot n \cdot T_{\mathrm{flow}}(n))$, where $T_{\mathrm{flow}}(n)$
is the time necessary to compute a maximum flow in a planar flow network with
multiple sources and sinks, and $k$ is the number of inflexible edges having at
least one endpoint of degree 4.
| [
{
"created": "Thu, 10 Apr 2014 20:24:06 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jan 2015 16:03:13 GMT",
"version": "v2"
}
] | 2015-01-08 | [
[
"Bläsius",
"Thomas",
""
],
[
"Lehmann",
"Sebastian",
""
],
[
"Rutter",
"Ignaz",
""
]
] | We consider the problem of creating plane orthogonal drawings of 4-planar graphs (planar graphs with maximum degree 4) with constraints on the number of bends per edge. More precisely, we have a flexibility function assigning to each edge $e$ a natural number $\mathrm{flex}(e)$, its flexibility. The problem FlexDraw asks whether there exists an orthogonal drawing such that each edge $e$ has at most $\mathrm{flex}(e)$ bends. It is known that FlexDraw is NP-hard if $\mathrm{flex}(e) = 0$ for every edge $e$. On the other hand, FlexDraw can be solved efficiently if $\mathrm{flex}(e) \ge 1$ and is trivial if $\mathrm{flex}(e) \ge 2$ for every edge $e$. To close the gap between the NP-hardness for $\mathrm{flex}(e) = 0$ and the efficient algorithm for $\mathrm{flex}(e) \ge 1$, we investigate the computational complexity of FlexDraw in case only few edges are inflexible (i.e., have flexibility~$0$). We show that for any $\varepsilon > 0$ FlexDraw is NP-complete for instances with $O(n^\varepsilon)$ inflexible edges with pairwise distance $\Omega(n^{1-\varepsilon})$ (including the case where they induce a matching). On the other hand, we give an FPT-algorithm with running time $O(2^k\cdot n \cdot T_{\mathrm{flow}}(n))$, where $T_{\mathrm{flow}}(n)$ is the time necessary to compute a maximum flow in a planar flow network with multiple sources and sinks, and $k$ is the number of inflexible edges having at least one endpoint of degree 4. |
1901.10645 | Sara Rouhani | Sara Rouhani, Luke Butterworth, Adam D. Simmons, Darryl G. Humphery,
and Ralph Deters | MediChainTM: A Secure Decentralized Medical Data Asset Management System | 2018 IEEE Confs on Internet of Things, Green Computing and
Communications, Cyber, Physical and Social Computing, Smart Data, Blockchain,
Computer and Information Technology, Congress on Cybermatics | null | 10.1109/Cybermatics_2018.2018.00258 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The set of distributed ledger architectures known as blockchain is best known
for cryptocurrency applications such as Bitcoin and Ethereum. These
permissionless block chains are showing the potential to be disruptive to the
financial services industry. Their broader adoption is likely to be limited by
the maximum block size, the cost of the Proof of Work consensus mechanism, and
the increasing size of any given chain overwhelming most of the participating
nodes. These factors have led to many cryptocurrency blockchains to become
centralized in the nodes with enough computing power and storage to be a
dominant miner and validator. Permissioned chains operate in trusted
environments and can, therefore, avoid the computationally expensive consensus
mechanisms. Permissioned chains are still susceptible to asset storage demands
and non-standard user interfaces that will impede their adoption. This paper
describes an approach to addressing these limitations: a permissioned
blockchain that uses off-chain storage of the data assets, accessed through a
standard browser and mobile app. The implementation in the Hyperledger
framework is described as is an example use of patient-centered health data
management.
| [
{
"created": "Wed, 30 Jan 2019 02:22:07 GMT",
"version": "v1"
}
] | 2019-01-31 | [
[
"Rouhani",
"Sara",
""
],
[
"Butterworth",
"Luke",
""
],
[
"Simmons",
"Adam D.",
""
],
[
"Humphery",
"Darryl G.",
""
],
[
"Deters",
"Ralph",
""
]
] | The set of distributed ledger architectures known as blockchain is best known for cryptocurrency applications such as Bitcoin and Ethereum. These permissionless block chains are showing the potential to be disruptive to the financial services industry. Their broader adoption is likely to be limited by the maximum block size, the cost of the Proof of Work consensus mechanism, and the increasing size of any given chain overwhelming most of the participating nodes. These factors have led many cryptocurrency blockchains to become centralized in the nodes with enough computing power and storage to be a dominant miner and validator. Permissioned chains operate in trusted environments and can, therefore, avoid the computationally expensive consensus mechanisms. Permissioned chains are still susceptible to asset storage demands and non-standard user interfaces that will impede their adoption. This paper describes an approach to addressing these limitations: a permissioned blockchain that uses off-chain storage of the data assets, accessed through a standard browser and mobile app. The implementation in the Hyperledger framework is described as is an example use of patient-centered health data management. |
0811.1335 | Mugurel Ionut Andreica | Mugurel Ionut Andreica | Algorithmic Techniques for Several Optimization Problems Regarding
Distributed Systems with Tree Topologies | The 16th International Conference on Applied and Industrial
Mathematics, Oradea, Romania, 9-11 October, 2008. ROMAI Journal, vol. 4,
2008. (ISSN: 841-5512). In Press | ROMAI Journal, vol. 4, no. 1, pp. 1-25, 2008 (ISSN: 1841-5512) ;
http://www.romai.ro | null | null | cs.DS cs.DM cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the development of distributed systems progresses, more and more
challenges arise and the need for developing optimized systems and for
optimizing existing systems from multiple perspectives becomes more stringent.
In this paper I present novel algorithmic techniques for solving several
optimization problems regarding distributed systems with tree topologies. I
address topics like: reliability improvement, partitioning, coloring, content
delivery, optimal matchings, as well as some tree counting aspects. Some of the
presented techniques are only of theoretical interest, while others can be used
in practical settings.
| [
{
"created": "Sun, 9 Nov 2008 12:59:45 GMT",
"version": "v1"
}
] | 2009-03-21 | [
[
"Andreica",
"Mugurel Ionut",
""
]
] | As the development of distributed systems progresses, more and more challenges arise and the need for developing optimized systems and for optimizing existing systems from multiple perspectives becomes more stringent. In this paper I present novel algorithmic techniques for solving several optimization problems regarding distributed systems with tree topologies. I address topics like: reliability improvement, partitioning, coloring, content delivery, optimal matchings, as well as some tree counting aspects. Some of the presented techniques are only of theoretical interest, while others can be used in practical settings. |
2405.07621 | Satheesh Kumar Perepu Dr | Kaushik Dey, Satheesh K. Perepu, Abir Das, Pallab Dasgupta | Towards Adaptive IMFs -- Generalization of utility functions in
Multi-Agent Frameworks | Accepted in Netsoft-2024 conference | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Intent Management Function (IMF) is an integral part of future-generation
networks. In recent years, there has been some work on AI-based IMFs that can
handle conflicting intents and prioritize the global objective based on an a priori
definition of the utility function and accorded priorities for competing
intents. Some of the earlier works use Multi-Agent Reinforcement Learning
(MARL) techniques with AdHoc Teaming (AHT) approaches for efficient conflict
handling in IMF. However, the success of such frameworks in real-life scenarios
requires them to be flexible to business situations. The intent priorities can
change and the utility function, which measures the extent of intent
fulfilment, may also vary in definition. This paper proposes a novel mechanism
whereby the IMF can generalize to different forms of utility functions and
change of intent priorities at run-time without additional training. Such
generalization ability, without additional training requirements, would help to
deploy IMF in live networks where customer intents and priorities change
frequently. Results on the network emulator demonstrate the efficacy of the
approach and its scalability to new intents, outperforming existing techniques
that require additional training to achieve the same degree of flexibility,
thereby saving cost and increasing efficiency and adaptability.
| [
{
"created": "Mon, 13 May 2024 10:27:11 GMT",
"version": "v1"
},
{
"created": "Tue, 14 May 2024 06:29:36 GMT",
"version": "v2"
}
] | 2024-05-15 | [
[
"Dey",
"Kaushik",
""
],
[
"Perepu",
"Satheesh K.",
""
],
[
"Das",
"Abir",
""
],
[
"Dasgupta",
"Pallab",
""
]
] | Intent Management Function (IMF) is an integral part of future-generation networks. In recent years, there has been some work on AI-based IMFs that can handle conflicting intents and prioritize the global objective based on an a priori definition of the utility function and accorded priorities for competing intents. Some of the earlier works use Multi-Agent Reinforcement Learning (MARL) techniques with AdHoc Teaming (AHT) approaches for efficient conflict handling in IMF. However, the success of such frameworks in real-life scenarios requires them to be flexible to business situations. The intent priorities can change and the utility function, which measures the extent of intent fulfilment, may also vary in definition. This paper proposes a novel mechanism whereby the IMF can generalize to different forms of utility functions and change of intent priorities at run-time without additional training. Such generalization ability, without additional training requirements, would help to deploy IMF in live networks where customer intents and priorities change frequently. Results on the network emulator demonstrate the efficacy of the approach and its scalability to new intents, outperforming existing techniques that require additional training to achieve the same degree of flexibility, thereby saving cost and increasing efficiency and adaptability. |
1807.02800 | Pascal Mettes | Pascal Mettes and Cees G. M. Snoek | Spatio-Temporal Instance Learning: Action Tubes from Class Supervision | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this work is spatio-temporal action localization in videos, using
only the supervision from video-level class labels. The state-of-the-art casts
this weakly-supervised action localization regime as a Multiple Instance
Learning problem, where instances are a priori computed spatio-temporal
proposals. Rather than disconnecting the spatio-temporal learning from the
training, we propose Spatio-Temporal Instance Learning, which enables action
localization directly from box proposals in video frames. We outline the
assumptions of our model and propose a max-margin objective and optimization
with latent variables that enable spatio-temporal learning of actions from
video labels. We also provide an efficient linking algorithm and two reranking
strategies to facilitate and further improve the action localization.
Experimental evaluation on four action datasets demonstrates the effectiveness
of our approach for localization from weak supervision. Moreover, we show how
to incorporate other supervision levels and mixtures, as a step towards
determining optimal supervision strategies for action localization.
| [
{
"created": "Sun, 8 Jul 2018 11:12:51 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Nov 2018 21:13:28 GMT",
"version": "v2"
}
] | 2018-11-26 | [
[
"Mettes",
"Pascal",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] | The goal of this work is spatio-temporal action localization in videos, using only the supervision from video-level class labels. The state-of-the-art casts this weakly-supervised action localization regime as a Multiple Instance Learning problem, where instances are a priori computed spatio-temporal proposals. Rather than disconnecting the spatio-temporal learning from the training, we propose Spatio-Temporal Instance Learning, which enables action localization directly from box proposals in video frames. We outline the assumptions of our model and propose a max-margin objective and optimization with latent variables that enable spatio-temporal learning of actions from video labels. We also provide an efficient linking algorithm and two reranking strategies to facilitate and further improve the action localization. Experimental evaluation on four action datasets demonstrates the effectiveness of our approach for localization from weak supervision. Moreover, we show how to incorporate other supervision levels and mixtures, as a step towards determining optimal supervision strategies for action localization. |
2402.01126 | Douglas Poland | Douglas Poland and Amar Saini | Seeing Objects in a Cluttered World: Computational Objectness from
Motion in Video | 10 pages, 11 figures, plus 18 pages of Supplemental Information | null | null | LLNL-JRNL-859920 | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Perception of the visually disjoint surfaces of our cluttered world as whole
objects, physically distinct from those overlapping them, is a cognitive
phenomenon called objectness that forms the basis of our visual perception.
Shared by all vertebrates and present at birth in humans, it enables
object-centric representation and reasoning about the visual world. We present
a computational approach to objectness that leverages motion cues and
spatio-temporal attention using a pair of supervised spatio-temporal
R(2+1)U-Nets. The first network detects motion boundaries and classifies the
pixels at those boundaries in terms of their local foreground-background sense.
This motion boundary sense (MBS) information is passed, along with a
spatio-temporal object attention cue, to an attentional surface perception
(ASP) module which infers the form of the attended object over a sequence of
frames and classifies its 'pixels' as visible or obscured. The spatial form of
the attention cue is flexible, but it must loosely track the attended object
which need not be visible. We demonstrate the ability of this simple but novel
approach to infer objectness from phenomenology without object models, and show
that it delivers robust perception of individual attended objects in cluttered
scenes, even with blur and camera shake. We show that our data diversity and
augmentation minimize bias and facilitate transfer to real video. Finally, we
describe how this computational objectness capability can grow in
sophistication and anchor a robust modular video object perception framework.
| [
{
"created": "Fri, 2 Feb 2024 03:57:11 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Poland",
"Douglas",
""
],
[
"Saini",
"Amar",
""
]
] | Perception of the visually disjoint surfaces of our cluttered world as whole objects, physically distinct from those overlapping them, is a cognitive phenomenon called objectness that forms the basis of our visual perception. Shared by all vertebrates and present at birth in humans, it enables object-centric representation and reasoning about the visual world. We present a computational approach to objectness that leverages motion cues and spatio-temporal attention using a pair of supervised spatio-temporal R(2+1)U-Nets. The first network detects motion boundaries and classifies the pixels at those boundaries in terms of their local foreground-background sense. This motion boundary sense (MBS) information is passed, along with a spatio-temporal object attention cue, to an attentional surface perception (ASP) module which infers the form of the attended object over a sequence of frames and classifies its 'pixels' as visible or obscured. The spatial form of the attention cue is flexible, but it must loosely track the attended object which need not be visible. We demonstrate the ability of this simple but novel approach to infer objectness from phenomenology without object models, and show that it delivers robust perception of individual attended objects in cluttered scenes, even with blur and camera shake. We show that our data diversity and augmentation minimize bias and facilitate transfer to real video. Finally, we describe how this computational objectness capability can grow in sophistication and anchor a robust modular video object perception framework. |
2110.12661 | Jiawei Zhao | Jiawei Zhao, Florian Sch\"afer, Anima Anandkumar | ZerO Initialization: Initializing Neural Networks with only Zeros and
Ones | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks are usually initialized with random weights, with
adequately selected initial variance to ensure stable signal propagation during
training. However, selecting the appropriate variance becomes challenging
especially as the number of layers grows. In this work, we replace random
weight initialization with a fully deterministic initialization scheme, viz.,
ZerO, which initializes the weights of networks with only zeros and ones (up to
a normalization factor), based on identity and Hadamard transforms. Through
both theoretical and empirical studies, we demonstrate that ZerO is able to
train networks without damaging their expressivity. Applying ZerO on ResNet
achieves state-of-the-art performance on various datasets, including ImageNet,
which suggests random weights may be unnecessary for network initialization. In
addition, ZerO has many benefits, such as training ultra deep networks (without
batch-normalization), exhibiting low-rank learning trajectories that result in
low-rank and sparse solutions, and improving training reproducibility.
| [
{
"created": "Mon, 25 Oct 2021 06:17:33 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Aug 2022 03:00:36 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Nov 2022 17:17:26 GMT",
"version": "v3"
}
] | 2022-11-07 | [
[
"Zhao",
"Jiawei",
""
],
[
"Schäfer",
"Florian",
""
],
[
"Anandkumar",
"Anima",
""
]
] | Deep neural networks are usually initialized with random weights, with adequately selected initial variance to ensure stable signal propagation during training. However, selecting the appropriate variance becomes challenging especially as the number of layers grows. In this work, we replace random weight initialization with a fully deterministic initialization scheme, viz., ZerO, which initializes the weights of networks with only zeros and ones (up to a normalization factor), based on identity and Hadamard transforms. Through both theoretical and empirical studies, we demonstrate that ZerO is able to train networks without damaging their expressivity. Applying ZerO on ResNet achieves state-of-the-art performance on various datasets, including ImageNet, which suggests random weights may be unnecessary for network initialization. In addition, ZerO has many benefits, such as training ultra deep networks (without batch-normalization), exhibiting low-rank learning trajectories that result in low-rank and sparse solutions, and improving training reproducibility. |
2312.10624 | Jie JW Wu PhD | Jie JW Wu | AutoOffAB: Toward Automated Offline A/B Testing for Data-Driven
Requirement Engineering | 5 pages, 2 figures. Accepted at FSE 2024 (32nd ACM International
Conference on the Foundations of Software Engineering) | 32nd ACM International Conference on the Foundations of Software
Engineering (FSE 2024) | 10.1145/3663529.3663780 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software companies have widely used online A/B testing to evaluate the impact
of a new technology by offering it to groups of users and comparing it against
the unmodified product. However, running online A/B testing needs not only
efforts in design, implementation, and stakeholders' approval to be served in
production but also several weeks to collect the data in iterations. To address
these issues, a recently emerging topic, called "Offline A/B Testing", is
getting increasing attention, intending to conduct the offline evaluation of
new technologies by estimating historical logged data. Although this approach
is promising due to lower implementation effort, faster turnaround time, and no
potential user harm, for it to be effectively prioritized as requirements in
practice, several limitations need to be addressed, including its discrepancy
with online A/B test results, and lack of systematic updates on varying data
and parameters. In response, in this vision paper, I introduce AutoOffAB, an
idea to automatically run variants of offline A/B testing against recent
logging and update the offline evaluation results, which are used to make
decisions on requirements more reliably and systematically.
| [
{
"created": "Sun, 17 Dec 2023 06:49:14 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Aug 2024 08:17:37 GMT",
"version": "v2"
}
] | 2024-08-12 | [
[
"Wu",
"Jie JW",
""
]
] | Software companies have widely used online A/B testing to evaluate the impact of a new technology by offering it to groups of users and comparing it against the unmodified product. However, running online A/B testing needs not only efforts in design, implementation, and stakeholders' approval to be served in production but also several weeks to collect the data in iterations. To address these issues, a recently emerging topic, called "Offline A/B Testing", is getting increasing attention, intending to conduct the offline evaluation of new technologies by estimating historical logged data. Although this approach is promising due to lower implementation effort, faster turnaround time, and no potential user harm, for it to be effectively prioritized as requirements in practice, several limitations need to be addressed, including its discrepancy with online A/B test results, and lack of systematic updates on varying data and parameters. In response, in this vision paper, I introduce AutoOffAB, an idea to automatically run variants of offline A/B testing against recent logging and update the offline evaluation results, which are used to make decisions on requirements more reliably and systematically. |
1609.03773 | Li Cheng | Chi Xu, Lakshmi Narasimhan Govindarajan, Yu Zhang, Li Cheng | Lie-X: Depth Image Based Articulated Object Pose Estimation, Tracking,
and Action Recognition on Lie Groups | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pose estimation, tracking, and action recognition of articulated objects from
depth images are important and challenging problems, which are normally
considered separately. In this paper, a unified paradigm based on Lie group
theory is proposed, which enables us to collectively address these related
problems. Our approach is also applicable to a wide range of articulated
objects. Empirically, it is evaluated on lab animals, including mice and fish,
as well as on the human hand. On these applications, it is shown to deliver
competitive results compared to the state of the art and to non-trivial
baselines including convolutional neural networks and regression forest
methods.
| [
{
"created": "Tue, 13 Sep 2016 11:36:26 GMT",
"version": "v1"
}
] | 2016-09-14 | [
[
"Xu",
"Chi",
""
],
[
"Govindarajan",
"Lakshmi Narasimhan",
""
],
[
"Zhang",
"Yu",
""
],
[
"Cheng",
"Li",
""
]
] | Pose estimation, tracking, and action recognition of articulated objects from depth images are important and challenging problems, which are normally considered separately. In this paper, a unified paradigm based on Lie group theory is proposed, which enables us to collectively address these related problems. Our approach is also applicable to a wide range of articulated objects. Empirically, it is evaluated on lab animals, including mice and fish, as well as on the human hand. On these applications, it is shown to deliver competitive results compared to the state of the art and to non-trivial baselines including convolutional neural networks and regression forest methods. |
cs/0012014 | Gyongyi Szilagyi | Gyongyi Szilagyi, Tibor Gyimothy and Jan Maluszynski | Slicing of Constraint Logic Programs | In M. Ducasse (ed), proceedings of the Fourth International Workshop
on Automated Debugging (AADEBUG 2000), August 2000, Munich. cs.SE/0010035 | null | null | null | cs.SE | null | Slicing is a program analysis technique originally developed for imperative
languages. It facilitates understanding of data flow and debugging.
This paper discusses slicing of Constraint Logic Programs. Constraint Logic
Programming (CLP) is an emerging software technology with a growing number of
applications. Data flow in constraint programs is not explicit, and for this
reason the concepts of slice and the slicing techniques of imperative languages
are not directly applicable.
This paper formulates declarative notions of slice suitable for CLP. They
provide a basis for defining slicing techniques (both dynamic and static) based
on variable sharing. The techniques are further extended by using groundness
information.
A prototype dynamic slicer of CLP programs implementing the presented ideas
is briefly described together with the results of some slicing experiments.
| [
{
"created": "Mon, 18 Dec 2000 11:59:31 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Szilagyi",
"Gyongyi",
""
],
[
"Gyimothy",
"Tibor",
""
],
[
"Maluszynski",
"Jan",
""
]
] | Slicing is a program analysis technique originally developed for imperative languages. It facilitates understanding of data flow and debugging. This paper discusses slicing of Constraint Logic Programs. Constraint Logic Programming (CLP) is an emerging software technology with a growing number of applications. Data flow in constraint programs is not explicit, and for this reason the concepts of slice and the slicing techniques of imperative languages are not directly applicable. This paper formulates declarative notions of slice suitable for CLP. They provide a basis for defining slicing techniques (both dynamic and static) based on variable sharing. The techniques are further extended by using groundness information. A prototype dynamic slicer of CLP programs implementing the presented ideas is briefly described together with the results of some slicing experiments. |
2403.05493 | Agnes Luhtaru | Agnes Luhtaru, Taido Purason, Martin Vainikko, Maksym Del, Mark Fishel | To Err Is Human, but Llamas Can Learn It Too | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study explores enhancing grammatical error correction (GEC) through
artificial error generation (AEG) using language models (LMs). Specifically, we
fine-tune Llama 2-based LMs for error generation and find that this approach
yields synthetic errors akin to human errors. Next, we train GEC Llama models
with the help of these artificial errors and outperform previous
state-of-the-art error correction models, with gains ranging between 0.8 and 6
F0.5 points across all tested languages (German, Ukrainian, and Estonian).
Moreover, we demonstrate that generating errors by fine-tuning smaller
sequence-to-sequence models and prompting large commercial LMs (GPT-3.5 and
GPT-4) also results in synthetic errors beneficially affecting error generation
models.
| [
{
"created": "Fri, 8 Mar 2024 18:04:03 GMT",
"version": "v1"
}
] | 2024-03-11 | [
[
"Luhtaru",
"Agnes",
""
],
[
"Purason",
"Taido",
""
],
[
"Vainikko",
"Martin",
""
],
[
"Del",
"Maksym",
""
],
[
"Fishel",
"Mark",
""
]
] | This study explores enhancing grammatical error correction (GEC) through artificial error generation (AEG) using language models (LMs). Specifically, we fine-tune Llama 2-based LMs for error generation and find that this approach yields synthetic errors akin to human errors. Next, we train GEC Llama models with the help of these artificial errors and outperform previous state-of-the-art error correction models, with gains ranging between 0.8 and 6 F0.5 points across all tested languages (German, Ukrainian, and Estonian). Moreover, we demonstrate that generating errors by fine-tuning smaller sequence-to-sequence models and prompting large commercial LMs (GPT-3.5 and GPT-4) also results in synthetic errors beneficially affecting error generation models. |
1606.07056 | Abhay Prakash | Abhay Prakash, Chris Brockett, Puneet Agrawal | Emulating Human Conversations using Convolutional Neural Network-based
IR | 5 pages, Neu-IR'16 SIGIR Workshop on Neural Information Retrieval,
July 21, 2016, Pisa, Italy | null | null | null | cs.AI cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conversational agents ("bots") are beginning to be widely used in
conversational interfaces. To design a system that is capable of emulating
human-like interactions, a conversational layer that can serve as a fabric for
chat-like interaction with the agent is needed. In this paper, we introduce a
model that employs Information Retrieval by utilizing convolutional deep
structured semantic neural network-based features in the ranker to present
human-like responses in ongoing conversation with a user. In conversations,
accounting for context is critical to the retrieval model; we show that our
context-sensitive approach using a Convolutional Deep Structured Semantic Model
(cDSSM) with character trigrams significantly outperforms several conventional
baselines in terms of the relevance of responses retrieved.
| [
{
"created": "Wed, 22 Jun 2016 19:55:24 GMT",
"version": "v1"
}
] | 2016-06-23 | [
[
"Prakash",
"Abhay",
""
],
[
"Brockett",
"Chris",
""
],
[
"Agrawal",
"Puneet",
""
]
] | Conversational agents ("bots") are beginning to be widely used in conversational interfaces. To design a system that is capable of emulating human-like interactions, a conversational layer that can serve as a fabric for chat-like interaction with the agent is needed. In this paper, we introduce a model that employs Information Retrieval by utilizing convolutional deep structured semantic neural network-based features in the ranker to present human-like responses in ongoing conversation with a user. In conversations, accounting for context is critical to the retrieval model; we show that our context-sensitive approach using a Convolutional Deep Structured Semantic Model (cDSSM) with character trigrams significantly outperforms several conventional baselines in terms of the relevance of responses retrieved. |
2406.17659 | Xiaohan Zhang | Xiaohan Zhang, Zainab Altaweel, Yohei Hayamizu, Yan Ding, Saeid Amiri,
Hao Yang, Andy Kaminski, Chad Esselink, Shiqi Zhang | DKPROMPT: Domain Knowledge Prompting Vision-Language Models for
Open-World Planning | null | null | null | null | cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Vision-language models (VLMs) have been applied to robot task planning
problems, where the robot receives a task in natural language and generates
plans based on visual inputs. While current VLMs have demonstrated strong
vision-language understanding capabilities, their performance is still far from
being satisfactory in planning tasks. At the same time, although classical task
planners, such as PDDL-based ones, are strong in planning for long-horizon tasks,
they do not work well in open worlds where unforeseen situations are common. In
this paper, we propose a novel task planning and execution framework, called
DKPROMPT, which automates VLM prompting using domain knowledge in PDDL for
classical planning in open worlds. Results from quantitative experiments show
that DKPROMPT outperforms classical planning, pure VLM-based and a few other
competitive baselines in task completion rate.
| [
{
"created": "Tue, 25 Jun 2024 15:49:47 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Zhang",
"Xiaohan",
""
],
[
"Altaweel",
"Zainab",
""
],
[
"Hayamizu",
"Yohei",
""
],
[
"Ding",
"Yan",
""
],
[
"Amiri",
"Saeid",
""
],
[
"Yang",
"Hao",
""
],
[
"Kaminski",
"Andy",
""
],
[
"Esselink",
"Chad",
""
],
[
"Zhang",
"Shiqi",
""
]
] | Vision-language models (VLMs) have been applied to robot task planning problems, where the robot receives a task in natural language and generates plans based on visual inputs. While current VLMs have demonstrated strong vision-language understanding capabilities, their performance is still far from being satisfactory in planning tasks. At the same time, although classical task planners, such as PDDL-based ones, are strong in planning for long-horizon tasks, they do not work well in open worlds where unforeseen situations are common. In this paper, we propose a novel task planning and execution framework, called DKPROMPT, which automates VLM prompting using domain knowledge in PDDL for classical planning in open worlds. Results from quantitative experiments show that DKPROMPT outperforms classical planning, pure VLM-based and a few other competitive baselines in task completion rate. |
1112.2774 | Tina Eliassi-Rad | Mangesh Gupte and Tina Eliassi-Rad | Measuring Tie Strength in Implicit Social Networks | 10 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a set of people and a set of events they attend, we address the problem
of measuring connectedness or tie strength between each pair of persons given
that attendance at mutual events gives an implicit social network between
people. We take an axiomatic approach to this problem. Starting from a list of
axioms that a measure of tie strength must satisfy, we characterize functions
that satisfy all the axioms and show that there is a range of measures that
satisfy this characterization. A measure of tie strength induces a ranking on
the edges (and on the set of neighbors for every person). We show that for
applications where the ranking, and not the absolute value of the tie strength,
is the important thing about the measure, the axioms are equivalent to a
natural partial order. Also, to settle on a particular measure, we must make a
non-obvious decision about extending this partial order to a total order, and
that this decision is best left to particular applications. We classify
measures found in prior literature according to the axioms that they satisfy.
In our experiments, we measure tie strength and the coverage of our axioms in
several datasets. Also, for each dataset, we bound the maximum Kendall's Tau
divergence (which measures the number of pairwise disagreements between two
lists) between all measures that satisfy the axioms using the partial order.
This informs us if particular datasets are well behaved where we do not have to
worry about which measure to choose, or we have to be careful about the exact
choice of measure we make.
| [
{
"created": "Tue, 13 Dec 2011 02:30:22 GMT",
"version": "v1"
}
] | 2011-12-14 | [
[
"Gupte",
"Mangesh",
""
],
[
"Eliassi-Rad",
"Tina",
""
]
] | Given a set of people and a set of events they attend, we address the problem of measuring connectedness or tie strength between each pair of persons given that attendance at mutual events gives an implicit social network between people. We take an axiomatic approach to this problem. Starting from a list of axioms that a measure of tie strength must satisfy, we characterize functions that satisfy all the axioms and show that there is a range of measures that satisfy this characterization. A measure of tie strength induces a ranking on the edges (and on the set of neighbors for every person). We show that for applications where the ranking, and not the absolute value of the tie strength, is the important thing about the measure, the axioms are equivalent to a natural partial order. Also, to settle on a particular measure, we must make a non-obvious decision about extending this partial order to a total order, and that this decision is best left to particular applications. We classify measures found in prior literature according to the axioms that they satisfy. In our experiments, we measure tie strength and the coverage of our axioms in several datasets. Also, for each dataset, we bound the maximum Kendall's Tau divergence (which measures the number of pairwise disagreements between two lists) between all measures that satisfy the axioms using the partial order. This informs us if particular datasets are well behaved where we do not have to worry about which measure to choose, or we have to be careful about the exact choice of measure we make. |
2107.12048 | Wei Liu | Wei Liu, Li Chen, and Wenyi Zhang | Decentralized Federated Learning: Balancing Communication and Computing
Costs | null | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized stochastic gradient descent (SGD) is a driving engine for
decentralized federated learning (DFL). The performance of decentralized SGD is
jointly influenced by inter-node communications and local updates. In this
paper, we propose a general DFL framework, which implements both multiple local
updates and multiple inter-node communications periodically, to strike a
balance between communication efficiency and model consensus. It can provide a
general decentralized SGD analytical framework. We establish strong convergence
guarantees for the proposed DFL algorithm without the assumption of convex
objectives. The convergence rate of DFL can be optimized to achieve the balance
of communication and computing costs under constrained resources. For improving
communication efficiency of DFL, compressed communication is further introduced
to the proposed DFL as a new scheme, named DFL with compressed communication
(C-DFL). The proposed C-DFL exhibits linear convergence for strongly convex
objectives. Experiment results based on MNIST and CIFAR-10 datasets illustrate
the superiority of DFL over traditional decentralized SGD methods and show that
C-DFL further enhances communication efficiency.
| [
{
"created": "Mon, 26 Jul 2021 09:09:45 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Aug 2021 03:57:17 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jan 2022 05:10:09 GMT",
"version": "v3"
},
{
"created": "Fri, 11 Feb 2022 04:15:35 GMT",
"version": "v4"
}
] | 2022-02-14 | [
[
"Liu",
"Wei",
""
],
[
"Chen",
"Li",
""
],
[
"Zhang",
"Wenyi",
""
]
] | Decentralized stochastic gradient descent (SGD) is a driving engine for decentralized federated learning (DFL). The performance of decentralized SGD is jointly influenced by inter-node communications and local updates. In this paper, we propose a general DFL framework, which implements both multiple local updates and multiple inter-node communications periodically, to strike a balance between communication efficiency and model consensus. It can provide a general decentralized SGD analytical framework. We establish strong convergence guarantees for the proposed DFL algorithm without the assumption of convex objectives. The convergence rate of DFL can be optimized to achieve the balance of communication and computing costs under constrained resources. For improving communication efficiency of DFL, compressed communication is further introduced to the proposed DFL as a new scheme, named DFL with compressed communication (C-DFL). The proposed C-DFL exhibits linear convergence for strongly convex objectives. Experiment results based on MNIST and CIFAR-10 datasets illustrate the superiority of DFL over traditional decentralized SGD methods and show that C-DFL further enhances communication efficiency. |
2205.13804 | Moritz Reuss | Moritz Reuss, Niels van Duijkeren, Robert Krug, Philipp Becker,
Vaisakh Shaj and Gerhard Neumann | End-to-End Learning of Hybrid Inverse Dynamics Models for Precise and
Compliant Impedance Control | Accepted for publication at Robotics: Science and System XVIII (RSS),
year 2022. Paper length is 13 pages (i.e. 9 pages of technical content, 1
page of the Bibliography/References and 3 pages of Appendix) | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | It is well-known that inverse dynamics models can improve tracking
performance in robot control. These models need to precisely capture the robot
dynamics, which consist of well-understood components, e.g., rigid body
dynamics, and effects that remain challenging to capture, e.g., stick-slip
friction and mechanical flexibilities. Such effects exhibit hysteresis and
partial observability, rendering them particularly challenging to model.
Hence, hybrid models, which combine a physical prior with data-driven
approaches are especially well-suited in this setting. We present a novel
hybrid model formulation that enables us to identify fully physically
consistent inertial parameters of a rigid body dynamics model which is paired
with a recurrent neural network architecture, allowing us to capture unmodeled
partially observable effects using the network memory. We compare our approach
against state-of-the-art inverse dynamics models on a 7 degree of freedom
manipulator. Using data sets obtained through an optimal experiment design
approach, we study the accuracy of offline torque prediction and generalization
capabilities of joint learning methods. In control experiments on the real
system, we evaluate the model as a feed-forward term for impedance control and
show the feedback gains can be drastically reduced to achieve a given tracking
accuracy.
| [
{
"created": "Fri, 27 May 2022 07:39:28 GMT",
"version": "v1"
}
] | 2022-05-30 | [
[
"Reuss",
"Moritz",
""
],
[
"van Duijkeren",
"Niels",
""
],
[
"Krug",
"Robert",
""
],
[
"Becker",
"Philipp",
""
],
[
"Shaj",
"Vaisakh",
""
],
[
"Neumann",
"Gerhard",
""
]
] | It is well-known that inverse dynamics models can improve tracking performance in robot control. These models need to precisely capture the robot dynamics, which consist of well-understood components, e.g., rigid body dynamics, and effects that remain challenging to capture, e.g., stick-slip friction and mechanical flexibilities. Such effects exhibit hysteresis and partial observability, rendering them particularly challenging to model. Hence, hybrid models, which combine a physical prior with data-driven approaches are especially well-suited in this setting. We present a novel hybrid model formulation that enables us to identify fully physically consistent inertial parameters of a rigid body dynamics model which is paired with a recurrent neural network architecture, allowing us to capture unmodeled partially observable effects using the network memory. We compare our approach against state-of-the-art inverse dynamics models on a 7 degree of freedom manipulator. Using data sets obtained through an optimal experiment design approach, we study the accuracy of offline torque prediction and generalization capabilities of joint learning methods. In control experiments on the real system, we evaluate the model as a feed-forward term for impedance control and show the feedback gains can be drastically reduced to achieve a given tracking accuracy. |
2406.11555 | Lukas Vierling | Lukas Vierling, Jie Fu, Kai Chen | Input Conditioned Graph Generation for Language Agents | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent progress in Large Language Models (LLMs) and language agents has
demonstrated significant promise for various future applications across
multiple disciplines. While traditional approaches to language agents often
rely on fixed, handcrafted designs, our research aims to develop both learnable
and dynamic agents. Our method uses an existing framework that abstracts
language agents as graphs. Within this graph framework, we aim to learn a model
that can generate edges for every given input to the language agent. This
allows us to generate edges that represent the flow of communication within the
graph based on the given input, thereby adjusting the internal communication of
a language agent. We learn to generate these edges using a pretrained LLM that
is fine-tuned with reinforcement learning. This LLM can be fine-tuned on
several datasets simultaneously, and we hypothesize that the model learns to
adapt to these different domains during training, achieving good overall
performance when encountering data from different domains during deployment. We
demonstrate that our approach surpasses the previous static approach by nearly
6% accuracy on a combined dataset of MMLU and CMMLU, and by more than 10% when
trained with a sparsity-inducing loss. It also performs better in additional
experiments conducted with the MMLU and Mini Crossword Puzzles datasets. The
code is available at https://github.com/lukasVierling/DynamicGPTSwarm.
| [
{
"created": "Mon, 17 Jun 2024 13:53:15 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Vierling",
"Lukas",
""
],
[
"Fu",
"Jie",
""
],
[
"Chen",
"Kai",
""
]
] | Recent progress in Large Language Models (LLMs) and language agents has demonstrated significant promise for various future applications across multiple disciplines. While traditional approaches to language agents often rely on fixed, handcrafted designs, our research aims to develop both learnable and dynamic agents. Our method uses an existing framework that abstracts language agents as graphs. Within this graph framework, we aim to learn a model that can generate edges for every given input to the language agent. This allows us to generate edges that represent the flow of communication within the graph based on the given input, thereby adjusting the internal communication of a language agent. We learn to generate these edges using a pretrained LLM that is fine-tuned with reinforcement learning. This LLM can be fine-tuned on several datasets simultaneously, and we hypothesize that the model learns to adapt to these different domains during training, achieving good overall performance when encountering data from different domains during deployment. We demonstrate that our approach surpasses the previous static approach by nearly 6% accuracy on a combined dataset of MMLU and CMMLU, and by more than 10% when trained with a sparsity-inducing loss. It also performs better in additional experiments conducted with the MMLU and Mini Crossword Puzzles datasets. The code is available at https://github.com/lukasVierling/DynamicGPTSwarm. |
2403.01238 | Kaituo Feng | Kaituo Feng, Changsheng Li, Dongchun Ren, Ye Yuan, Guoren Wang | On the Road to Portability: Compressing End-to-End Motion Planner for
Autonomous Driving | Accepted by CVPR 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end motion planning models equipped with deep neural networks have
shown great potential for enabling full autonomous driving. However, the
oversized neural networks render them impractical for deployment on
resource-constrained systems, as they unavoidably require more computational
time and resources during inference. To handle this, knowledge distillation
offers a promising approach that compresses models by enabling a smaller
student model to learn from a larger teacher model. Nevertheless, how to apply
knowledge distillation to compress motion planners has not been explored so
far. In this paper, we propose PlanKD, the first knowledge distillation
framework tailored for compressing end-to-end motion planners. First,
considering that driving scenes are inherently complex, often containing
planning-irrelevant or even noisy information, transferring such information is
not beneficial for the student planner. Thus, we design an information
bottleneck based strategy to only distill planning-relevant information, rather
than transfer all information indiscriminately. Second, different waypoints in
an output planned trajectory may hold varying degrees of importance for motion
planning, where a slight deviation in certain crucial waypoints might lead to a
collision. Therefore, we devise a safety-aware waypoint-attentive distillation
module that assigns adaptive weights to different waypoints based on the
importance, to encourage the student to accurately mimic more crucial
waypoints, thereby improving overall safety. Experiments demonstrate that our
PlanKD can boost the performance of smaller planners by a large margin, and
significantly reduce their inference time.
| [
{
"created": "Sat, 2 Mar 2024 15:47:42 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Apr 2024 07:12:20 GMT",
"version": "v2"
}
] | 2024-04-16 | [
[
"Feng",
"Kaituo",
""
],
[
"Li",
"Changsheng",
""
],
[
"Ren",
"Dongchun",
""
],
[
"Yuan",
"Ye",
""
],
[
"Wang",
"Guoren",
""
]
] | End-to-end motion planning models equipped with deep neural networks have shown great potential for enabling full autonomous driving. However, the oversized neural networks render them impractical for deployment on resource-constrained systems, as they unavoidably require more computational time and resources during inference. To handle this, knowledge distillation offers a promising approach that compresses models by enabling a smaller student model to learn from a larger teacher model. Nevertheless, how to apply knowledge distillation to compress motion planners has not been explored so far. In this paper, we propose PlanKD, the first knowledge distillation framework tailored for compressing end-to-end motion planners. First, considering that driving scenes are inherently complex, often containing planning-irrelevant or even noisy information, transferring such information is not beneficial for the student planner. Thus, we design an information bottleneck based strategy to only distill planning-relevant information, rather than transfer all information indiscriminately. Second, different waypoints in an output planned trajectory may hold varying degrees of importance for motion planning, where a slight deviation in certain crucial waypoints might lead to a collision. Therefore, we devise a safety-aware waypoint-attentive distillation module that assigns adaptive weights to different waypoints based on the importance, to encourage the student to accurately mimic more crucial waypoints, thereby improving overall safety. Experiments demonstrate that our PlanKD can boost the performance of smaller planners by a large margin, and significantly reduce their inference time. |
1908.03999 | Jason Teutsch | Jason Teutsch, Michael Straka, Dan Boneh | Retrofitting a two-way peg between blockchains | null | null | null | null | cs.CR cs.LO econ.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In December 2015, a bounty emerged to establish both reliable communication
and secure transfer of value between the Dogecoin and Ethereum blockchains.
This prized "Dogethereum bridge" would allow parties to "lock" a DOGE coin on
Dogecoin and in exchange receive a newly minted WOW token in Ethereum. Any
subsequent owner of the WOW token could burn it and, in exchange, earn the
right to "unlock" a DOGE on Dogecoin.
We describe an efficient, trustless, and retrofitting Dogethereum
construction which requires no fork but rather employs economic collateral to
achieve a "lock" operation in Dogecoin. The protocol relies on bulletproofs,
Truebit, and parametrized tokens to efficiently and trustlessly relay events
from the "true" Dogecoin blockchain into Ethereum. The present construction not
only enables cross-platform exchange but also allows Ethereum smart contracts
to trustlessly access Dogecoin. A similar technique adds Ethereum-based smart
contracts to Bitcoin and Bitcoin data to Ethereum smart contracts.
| [
{
"created": "Mon, 12 Aug 2019 04:41:13 GMT",
"version": "v1"
}
] | 2019-08-13 | [
[
"Teutsch",
"Jason",
""
],
[
"Straka",
"Michael",
""
],
[
"Boneh",
"Dan",
""
]
] | In December 2015, a bounty emerged to establish both reliable communication and secure transfer of value between the Dogecoin and Ethereum blockchains. This prized "Dogethereum bridge" would allow parties to "lock" a DOGE coin on Dogecoin and in exchange receive a newly minted WOW token in Ethereum. Any subsequent owner of the WOW token could burn it and, in exchange, earn the right to "unlock" a DOGE on Dogecoin. We describe an efficient, trustless, and retrofitting Dogethereum construction which requires no fork but rather employs economic collateral to achieve a "lock" operation in Dogecoin. The protocol relies on bulletproofs, Truebit, and parametrized tokens to efficiently and trustlessly relay events from the "true" Dogecoin blockchain into Ethereum. The present construction not only enables cross-platform exchange but also allows Ethereum smart contracts to trustlessly access Dogecoin. A similar technique adds Ethereum-based smart contracts to Bitcoin and Bitcoin data to Ethereum smart contracts. |
1702.06028 | Andrea Cerone | Andrea Cerone, Alexey Gotsman, Hongseok Yang | Algebraic Laws for Weak Consistency (Extended Version) | Extended Version of the CONCUR'17 paper | null | 10.4230/LIPIcs.CONCUR.2017.22 | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern distributed systems often rely on so-called weakly-consistent
databases, which achieve scalability by sacrificing the consistency guarantee
of distributed transaction processing. Such databases have been formalised in
two different styles, one based on abstract executions and the other based on
dependency graphs. The choice between these styles has been made according to
intended applications: the former has been used to specify and verify the
implementation of these databases, and the latter to prove properties of
programs running on top of the databases. In this paper, we present a set of
novel algebraic laws (i.e. inequations) that connect these two styles of
specifications; the laws relate binary relations used in a specification based
on abstract executions, to those used in a specification based on dependency
graphs. We then show that this algebraic connection gives rise to so-called
robustness criteria, conditions which ensure that a program running on top of
a weakly-consistent database does not exhibit anomalous behaviours due to this
weak consistency. These criteria make it easy to reason about programs running
on top of these databases, and may become a basis for dynamic or static program
analyses. For a certain class of consistency model specifications, we prove a
full abstraction result that connects the two styles of specifications.
| [
{
"created": "Mon, 20 Feb 2017 15:55:20 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Apr 2017 00:11:58 GMT",
"version": "v2"
},
{
"created": "Thu, 4 May 2017 18:36:07 GMT",
"version": "v3"
},
{
"created": "Tue, 1 Aug 2017 15:47:01 GMT",
"version": "v4"
}
] | 2017-08-02 | [
[
"Cerone",
"Andrea",
""
],
[
"Gotsman",
"Alexey",
""
],
[
"Yang",
"Hongseok",
""
]
] | Modern distributed systems often rely on so-called weakly-consistent databases, which achieve scalability by sacrificing the consistency guarantee of distributed transaction processing. Such databases have been formalised in two different styles, one based on abstract executions and the other based on dependency graphs. The choice between these styles has been made according to intended applications: the former has been used to specify and verify the implementation of these databases, and the latter to prove properties of programs running on top of the databases. In this paper, we present a set of novel algebraic laws (i.e. inequations) that connect these two styles of specifications; the laws relate binary relations used in a specification based on abstract executions, to those used in a specification based on dependency graphs. We then show that this algebraic connection gives rise to so-called robustness criteria, conditions which ensure that a program running on top of a weakly-consistent database does not exhibit anomalous behaviours due to this weak consistency. These criteria make it easy to reason about programs running on top of these databases, and may become a basis for dynamic or static program analyses. For a certain class of consistency model specifications, we prove a full abstraction result that connects the two styles of specifications. |
1207.2860 | Shafqat Shad Mr | Shafqat Ali Shad, Enhong Chen, Faisal Malik Faisal Azeem | Enterprise Resource Planning - Real blessing or a Blessing in Disguise :
An Exploration of the Contextual Factors in Public Sector | null | Interdisciplinary Journal of Contemporary Research in Business,
vol. 2, no. 10, pp. 294-307, 2011 | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information systems have always been a prime focus for organizations in
both the local (Pakistani) and global environment. Now the race to be the best
through Information Systems has created its importance in public sector
organizations to meet the global challenges. Public sector organizations have
been facing problems in different segments of technology adoption especially in
ERP projects. ERP adoption/implementation projects in public sector
organizations still encounter major setbacks, ranging from partial to complete
failure. Cultural and other social barriers have hindered
technology adoption in Pakistan. Now in the case of big ERP adoptions the
contextual factors must be identified and addressed. The paper investigates the
reasons for success or failure by addressing the nature of complexities regarding
different contextual factors. The study includes a sample of Pakistan's four
public sector organizations. The sample of these four organizations includes two
organizations (Type-A) i.e. Oil & Gas Development Company Limited (OGDCL) and
National Database Registration Authority (NADRA) where ERP has been
successfully implemented and other two (Type-B) i.e. Pakistan Telecommunication
Corporation Limited (PTCL), Higher Education Commission (HEC) where ERP
implementation is in progress. The findings address the contextual factors i.e.
cultural, environmental & political changes which have a variable impact on ERP
systems adoption/implementation in addition to Business Process Re-engineering
(BPR). The paper also briefly includes an analysis of gaps between pre & post ERP
implementation scenarios.
| [
{
"created": "Thu, 12 Jul 2012 07:24:45 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Sep 2012 02:06:21 GMT",
"version": "v2"
}
] | 2012-09-24 | [
[
"Shad",
"Shafqat Ali",
""
],
[
"Chen",
"Enhong",
""
],
[
"Azeem",
"Faisal Malik Faisal",
""
]
] | Information systems have always been a prime focus for organizations in both the local (Pakistani) and global environment. Now the race to be the best through Information Systems has created its importance in public sector organizations to meet the global challenges. Public sector organizations have been facing problems in different segments of technology adoption especially in ERP projects. ERP adoption/implementation projects in public sector organizations still encounter major setbacks, ranging from partial to complete failure. Cultural and other social barriers have hindered technology adoption in Pakistan. Now in the case of big ERP adoptions the contextual factors must be identified and addressed. The paper investigates the reasons for success or failure by addressing the nature of complexities regarding different contextual factors. The study includes a sample of Pakistan's four public sector organizations. The sample of these four organizations includes two organizations (Type-A) i.e. Oil & Gas Development Company Limited (OGDCL) and National Database Registration Authority (NADRA) where ERP has been successfully implemented and other two (Type-B) i.e. Pakistan Telecommunication Corporation Limited (PTCL), Higher Education Commission (HEC) where ERP implementation is in progress. The findings address the contextual factors i.e. cultural, environmental & political changes which have a variable impact on ERP systems adoption/implementation in addition to Business Process Re-engineering (BPR). The paper also briefly includes an analysis of gaps between pre & post ERP implementation scenarios. |
1508.02086 | Hassan Kingravi | Hassan A. Kingravi, Harshal Maske, Girish Chowdhary | Kernel Controllers: A Systems-Theoretic Approach for Data-Driven
Modeling and Control of Spatiotemporally Evolving Processes | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of modeling, estimating, and controlling the latent
state of a spatiotemporally evolving continuous function using very few sensor
measurements and actuator locations. Our solution to the problem consists of
two parts: a predictive model of functional evolution, and feedback-based
estimator and controllers that can robustly recover the state of the model and
drive it to a desired function. We show that layering a dynamical systems prior
over temporal evolution of weights of a kernel model is a valid approach to
spatiotemporal modeling that leads to systems theoretic, control-usable,
predictive models. We provide sufficient conditions on the number of sensors
and actuators required to guarantee observability and controllability. The
approach is validated on a large real dataset, and in simulation for the
control of a spatiotemporally evolving function.
| [
{
"created": "Sun, 9 Aug 2015 21:26:55 GMT",
"version": "v1"
}
] | 2015-08-11 | [
[
"Kingravi",
"Hassan A.",
""
],
[
"Maske",
"Harshal",
""
],
[
"Chowdhary",
"Girish",
""
]
] | We consider the problem of modeling, estimating, and controlling the latent state of a spatiotemporally evolving continuous function using very few sensor measurements and actuator locations. Our solution to the problem consists of two parts: a predictive model of functional evolution, and feedback-based estimator and controllers that can robustly recover the state of the model and drive it to a desired function. We show that layering a dynamical systems prior over temporal evolution of weights of a kernel model is a valid approach to spatiotemporal modeling that leads to systems theoretic, control-usable, predictive models. We provide sufficient conditions on the number of sensors and actuators required to guarantee observability and controllability. The approach is validated on a large real dataset, and in simulation for the control of a spatiotemporally evolving function. |
1804.07379 | Garegin Grigoryan | Garegin Grigoryan, Yaoqing Liu | Toward a Programmable FIB Caching Architecture | null | Network Protocols (ICNP), 2017 IEEE 25th International Conference
on, 1-2 | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current Internet routing ecosystem is neither sustainable nor economical.
More than 711K IPv4 routes and more than 41K IPv6 routes exist in current
global Forwarding Information Bases (FIBs), with growth rates increasing. This
rapid growth has serious consequences, such as creating the need for costly FIB
memory upgrades and increased potential for Internet service outages. And while
FIB memories are power-hungry and prohibitively expensive, more than 70\% of
the routes in FIBs carry no traffic for long time periods, a wasteful use of
these expensive resources. Taking advantage of the emerging concept of
programmable data plane, we design a programmable FIB caching architecture to
address the existing concerns. Our preliminary evaluation results show that the
architecture can significantly mitigate the global routing scalability and poor
FIB utilization issues.
| [
{
"created": "Thu, 19 Apr 2018 21:10:17 GMT",
"version": "v1"
}
] | 2018-04-23 | [
[
"Grigoryan",
"Garegin",
""
],
[
"Liu",
"Yaoqing",
""
]
] | The current Internet routing ecosystem is neither sustainable nor economical. More than 711K IPv4 routes and more than 41K IPv6 routes exist in current global Forwarding Information Bases (FIBs), with growth rates increasing. This rapid growth has serious consequences, such as creating the need for costly FIB memory upgrades and increased potential for Internet service outages. And while FIB memories are power-hungry and prohibitively expensive, more than 70\% of the routes in FIBs carry no traffic for long time periods, a wasteful use of these expensive resources. Taking advantage of the emerging concept of programmable data plane, we design a programmable FIB caching architecture to address the existing concerns. Our preliminary evaluation results show that the architecture can significantly mitigate the global routing scalability and poor FIB utilization issues. |
1206.5247 | Daniel Eaton | Daniel Eaton, Kevin Murphy | Bayesian structure learning using dynamic programming and MCMC | Appears in Proceedings of the Twenty-Third Conference on Uncertainty
in Artificial Intelligence (UAI2007) | null | null | UAI-P-2007-PG-101-108 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MCMC methods for sampling from the space of DAGs can mix poorly due to the
local nature of the proposals that are commonly used. It has been shown that
sampling from the space of node orders yields better results [FK03, EW06].
Recently, Koivisto and Sood showed how one can analytically marginalize over
orders using dynamic programming (DP) [KS04, Koi06]. Their method computes the
exact marginal posterior edge probabilities, thus avoiding the need for MCMC.
Unfortunately, there are four drawbacks to the DP technique: it can only use
modular priors, it can only compute posteriors over modular features, it is
difficult to compute a predictive density, and it takes exponential time and
space. We show how to overcome the first three of these problems by using the
DP algorithm as a proposal distribution for MCMC in DAG space. We show that
this hybrid technique converges to the posterior faster than other methods,
resulting in more accurate structure learning and higher predictive likelihoods
on test data.
| [
{
"created": "Wed, 20 Jun 2012 14:54:43 GMT",
"version": "v1"
}
] | 2012-06-26 | [
[
"Eaton",
"Daniel",
""
],
[
"Murphy",
"Kevin",
""
]
] | MCMC methods for sampling from the space of DAGs can mix poorly due to the local nature of the proposals that are commonly used. It has been shown that sampling from the space of node orders yields better results [FK03, EW06]. Recently, Koivisto and Sood showed how one can analytically marginalize over orders using dynamic programming (DP) [KS04, Koi06]. Their method computes the exact marginal posterior edge probabilities, thus avoiding the need for MCMC. Unfortunately, there are four drawbacks to the DP technique: it can only use modular priors, it can only compute posteriors over modular features, it is difficult to compute a predictive density, and it takes exponential time and space. We show how to overcome the first three of these problems by using the DP algorithm as a proposal distribution for MCMC in DAG space. We show that this hybrid technique converges to the posterior faster than other methods, resulting in more accurate structure learning and higher predictive likelihoods on test data. |
1004.3580 | Loet Leydesdorff | Loet Leydesdorff, Tobias Opthof | Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal
Impact Factor based on Fractional Counting of Citations | null | null | null | null | cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Impact factors (and similar measures such as the Scimago Journal Rankings)
suffer from two problems: (i) citation behavior varies among fields of science
and therefore leads to systematic differences, and (ii) there are no statistics
to inform us whether differences are significant. The recently introduced SNIP
indicator of Scopus tries to remedy the first of these two problems, but a
number of normalization decisions are involved which makes it impossible to
test for significance. Using fractional counting of citations (based on the
assumption that impact is proportionate to the number of references in the
citing documents), citations can be contextualized at the paper level and
aggregated impacts of sets can be tested for their significance. It can be
shown that the weighted impact of Annals of Mathematics (0.247) is not so much
lower than that of Molecular Cell (0.386) despite a five-fold difference
between their impact factors (2.793 and 13.156, respectively).
| [
{
"created": "Tue, 20 Apr 2010 21:17:52 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Apr 2010 07:08:46 GMT",
"version": "v2"
}
] | 2010-04-27 | [
[
"Leydesdorff",
"Loet",
""
],
[
"Opthof",
"Tobias",
""
]
] | Impact factors (and similar measures such as the Scimago Journal Rankings) suffer from two problems: (i) citation behavior varies among fields of science and therefore leads to systematic differences, and (ii) there are no statistics to inform us whether differences are significant. The recently introduced SNIP indicator of Scopus tries to remedy the first of these two problems, but a number of normalization decisions are involved which makes it impossible to test for significance. Using fractional counting of citations (based on the assumption that impact is proportionate to the number of references in the citing documents), citations can be contextualized at the paper level and aggregated impacts of sets can be tested for their significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not so much lower than that of Molecular Cell (0.386) despite a five-fold difference between their impact factors (2.793 and 13.156, respectively). |
1710.09177 | Stefan M. Moser | Stefan M. Moser, Ligong Wang, Mich\`ele Wigger | Capacity Results on Multiple-Input Single-Output Wireless Optical
Channels | Submitted to IEEE Transactions on Information Theory | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper derives upper and lower bounds on the capacity of the
multiple-input single-output free-space optical intensity channel with
signal-independent additive Gaussian noise subject to both an average-intensity
and a peak-intensity constraint. In the limit where the signal-to-noise ratio
(SNR) tends to infinity, the asymptotic capacity is specified, while in the
limit where the SNR tends to zero, the exact slope of the capacity is also
given.
| [
{
"created": "Wed, 25 Oct 2017 11:36:08 GMT",
"version": "v1"
}
] | 2017-10-26 | [
[
"Moser",
"Stefan M.",
""
],
[
"Wang",
"Ligong",
""
],
[
"Wigger",
"Michèle",
""
]
] | This paper derives upper and lower bounds on the capacity of the multiple-input single-output free-space optical intensity channel with signal-independent additive Gaussian noise subject to both an average-intensity and a peak-intensity constraint. In the limit where the signal-to-noise ratio (SNR) tends to infinity, the asymptotic capacity is specified, while in the limit where the SNR tends to zero, the exact slope of the capacity is also given. |
2404.18708 | Andy L\"ucking | Andy L\"ucking, Alexander Henlein, Alexander Mehler | Iconic Gesture Semantics | 39 pages, 28 figures, under revision | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | The "meaning" of an iconic gesture is conditioned on its informational
evaluation. Only informational evaluation lifts a gesture to a quasi-linguistic
level that can interact with verbal content. Interaction is either vacuous or
regimented by usual lexicon-driven inferences. Informational evaluation is
spelled out as extended exemplification (extemplification) in terms of
perceptual classification of a gesture's visual iconic model. The iconic model
is derived from Frege/Montague-like truth-functional evaluation of a gesture's
form within spatially extended domains. We further argue that the perceptual
classification of instances of visual communication requires a notion of
meaning different from Frege/Montague frameworks. Therefore, a heuristic for
gesture interpretation is provided that can guide the working semanticist. In
sum, an iconic gesture semantics is introduced which covers the full range from
kinematic gesture representations over model-theoretic evaluation to
inferential interpretation in dynamic semantic frameworks.
| [
{
"created": "Mon, 29 Apr 2024 13:58:03 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Lücking",
"Andy",
""
],
[
"Henlein",
"Alexander",
""
],
[
"Mehler",
"Alexander",
""
]
] | The "meaning" of an iconic gesture is conditioned on its informational evaluation. Only informational evaluation lifts a gesture to a quasi-linguistic level that can interact with verbal content. Interaction is either vacuous or regimented by usual lexicon-driven inferences. Informational evaluation is spelled out as extended exemplification (extemplification) in terms of perceptual classification of a gesture's visual iconic model. The iconic model is derived from Frege/Montague-like truth-functional evaluation of a gesture's form within spatially extended domains. We further argue that the perceptual classification of instances of visual communication requires a notion of meaning different from Frege/Montague frameworks. Therefore, a heuristic for gesture interpretation is provided that can guide the working semanticist. In sum, an iconic gesture semantics is introduced which covers the full range from kinematic gesture representations over model-theoretic evaluation to inferential interpretation in dynamic semantic frameworks. |
0905.0283 | Kevin Wortman | David Eppstein and Kevin A. Wortman | Optimal Embedding Into Star Metrics | 12 pages, 3 figures | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an O(n^3 log^2 n)-time algorithm for the following problem: given
a finite metric space X, create a star-topology network with the points of X as
its leaves, such that the distances in the star are at least as large as in X,
with minimum dilation. As part of our algorithm, we solve in the same time
bound the parametric negative cycle detection problem: given a directed graph
with edge weights that are increasing linear functions of a parameter lambda,
find the smallest value of lambda such that the graph contains no
negative-weight cycles.
| [
{
"created": "Sun, 3 May 2009 19:21:52 GMT",
"version": "v1"
}
] | 2009-05-05 | [
[
"Eppstein",
"David",
""
],
[
"Wortman",
"Kevin A.",
""
]
] | We present an O(n^3 log^2 n)-time algorithm for the following problem: given a finite metric space X, create a star-topology network with the points of X as its leaves, such that the distances in the star are at least as large as in X, with minimum dilation. As part of our algorithm, we solve in the same time bound the parametric negative cycle detection problem: given a directed graph with edge weights that are increasing linear functions of a parameter lambda, find the smallest value of lambda such that the graph contains no negative-weight cycles. |
1203.3923 | Muhammad Anshari Mr | Mohammad Nabil Almunawar and Muhammad Anshari | Health Information Systems (HIS): Concept and Technology | International Conference Informatics Development, 2011 | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A health information system (HIS) is the intersection between healthcare's
business processes and information systems to deliver better healthcare
services. The nature of the healthcare industry, which is highly influenced by
economic, social, political, and technological factors, has changed over time.
This paper will address some important concepts of healthcare and related
terminologies to provide a holistic view for HIS. Related technological
milestones and major events are briefly summarized. The trends and rapid
development of health information technologies are also discussed.
| [
{
"created": "Sun, 18 Mar 2012 06:59:22 GMT",
"version": "v1"
}
] | 2012-03-20 | [
[
"Almunawar",
"Mohammad Nabil",
""
],
[
"Anshari",
"Muhammad",
""
]
] | A health information system (HIS) is the intersection between healthcare's business processes and information systems to deliver better healthcare services. The nature of the healthcare industry, which is highly influenced by economic, social, political, and technological factors, has changed over time. This paper will address some important concepts of healthcare and related terminologies to provide a holistic view for HIS. Related technological milestones and major events are briefly summarized. The trends and rapid development of health information technologies are also discussed. |
1401.3449 | Vincent Conitzer | Vincent Conitzer | Eliciting Single-Peaked Preferences Using Comparison Queries | null | Journal Of Artificial Intelligence Research, Volume 35, pages
161-191, 2009 | 10.1613/jair.2606 | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Voting is a general method for aggregating the preferences of multiple
agents. Each agent ranks all the possible alternatives, and based on this, an
aggregate ranking of the alternatives (or at least a winning alternative) is
produced. However, when there are many alternatives, it is impractical to
simply ask agents to report their complete preferences. Rather, the agents'
preferences, or at least the relevant parts thereof, need to be elicited. This
is done by asking the agents a (hopefully small) number of simple queries about
their preferences, such as comparison queries, which ask an agent to compare
two of the alternatives. Prior work on preference elicitation in voting has
focused on the case of unrestricted preferences. It has been shown that in this
setting, it is sometimes necessary to ask each agent (almost) as many queries
as would be required to determine an arbitrary ranking of the alternatives. In
contrast, in this paper, we focus on single-peaked preferences. We show that
such preferences can be elicited using only a linear number of comparison
queries, if either the order with respect to which preferences are
single-peaked is known, or at least one other agent's complete preferences are
known. We show that using a sublinear number of queries does not suffice. We
also consider the case of cardinally single-peaked preferences. For this case,
we show that if the alternatives' cardinal positions are known, then an agent's
preferences can be elicited using only a logarithmic number of queries;
however, we also show that if the cardinal positions are not known, then a
sublinear number of queries does not suffice. We present experimental results
for all elicitation algorithms. We also consider the problem of only eliciting
enough information to determine the aggregate ranking, and show that even for
this more modest objective, a sublinear number of queries per agent does not
suffice for known ordinal or unknown cardinal positions. Finally, we discuss
whether and how these techniques can be applied when preferences are almost
single-peaked.
| [
{
"created": "Wed, 15 Jan 2014 05:10:11 GMT",
"version": "v1"
}
] | 2014-01-16 | [
[
"Conitzer",
"Vincent",
""
]
] | Voting is a general method for aggregating the preferences of multiple agents. Each agent ranks all the possible alternatives, and based on this, an aggregate ranking of the alternatives (or at least a winning alternative) is produced. However, when there are many alternatives, it is impractical to simply ask agents to report their complete preferences. Rather, the agents' preferences, or at least the relevant parts thereof, need to be elicited. This is done by asking the agents a (hopefully small) number of simple queries about their preferences, such as comparison queries, which ask an agent to compare two of the alternatives. Prior work on preference elicitation in voting has focused on the case of unrestricted preferences. It has been shown that in this setting, it is sometimes necessary to ask each agent (almost) as many queries as would be required to determine an arbitrary ranking of the alternatives. In contrast, in this paper, we focus on single-peaked preferences. We show that such preferences can be elicited using only a linear number of comparison queries, if either the order with respect to which preferences are single-peaked is known, or at least one other agent's complete preferences are known. We show that using a sublinear number of queries does not suffice. We also consider the case of cardinally single-peaked preferences. For this case, we show that if the alternatives' cardinal positions are known, then an agent's preferences can be elicited using only a logarithmic number of queries; however, we also show that if the cardinal positions are not known, then a sublinear number of queries does not suffice. We present experimental results for all elicitation algorithms. We also consider the problem of only eliciting enough information to determine the aggregate ranking, and show that even for this more modest objective, a sublinear number of queries per agent does not suffice for known ordinal or unknown cardinal positions. 
Finally, we discuss whether and how these techniques can be applied when preferences are almost single-peaked. |
2211.11962 | Hai Wu | Hai Wu and Chenglu Wen and Wei Li and Xin Li and Ruigang Yang and
Cheng Wang | Transformation-Equivariant 3D Object Detection for Autonomous Driving | Accepted by AAAI 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D object detection has received increasing attention in autonomous driving
recently. Objects in 3D scenes are distributed with diverse orientations.
Ordinary detectors do not explicitly model the variations of rotation and
reflection transformations. Consequently, large networks and extensive data
augmentation are required for robust detection. Recent equivariant networks
explicitly model the transformation variations by applying shared networks on
multiple transformed point clouds, showing great potential in object geometry
modeling. However, it is difficult to apply such networks to 3D object
detection in autonomous driving due to their large computation cost and slow
reasoning speed. In this work, we present TED, an efficient
Transformation-Equivariant 3D Detector to overcome the computation cost and
speed issues. TED first applies a sparse convolution backbone to extract
multi-channel transformation-equivariant voxel features; and then aligns and
aggregates these equivariant features into lightweight and compact
representations for high-performance 3D object detection. On the highly
competitive KITTI 3D car detection leaderboard, TED ranked 1st among all
submissions with competitive efficiency.
| [
{
"created": "Tue, 22 Nov 2022 02:51:56 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Nov 2022 01:51:39 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Dec 2022 08:00:16 GMT",
"version": "v3"
}
] | 2022-12-02 | [
[
"Wu",
"Hai",
""
],
[
"Wen",
"Chenglu",
""
],
[
"Li",
"Wei",
""
],
[
"Li",
"Xin",
""
],
[
"Yang",
"Ruigang",
""
],
[
"Wang",
"Cheng",
""
]
] | 3D object detection has received increasing attention in autonomous driving recently. Objects in 3D scenes are distributed with diverse orientations. Ordinary detectors do not explicitly model the variations of rotation and reflection transformations. Consequently, large networks and extensive data augmentation are required for robust detection. Recent equivariant networks explicitly model the transformation variations by applying shared networks on multiple transformed point clouds, showing great potential in object geometry modeling. However, it is difficult to apply such networks to 3D object detection in autonomous driving due to their large computation cost and slow reasoning speed. In this work, we present TED, an efficient Transformation-Equivariant 3D Detector to overcome the computation cost and speed issues. TED first applies a sparse convolution backbone to extract multi-channel transformation-equivariant voxel features; and then aligns and aggregates these equivariant features into lightweight and compact representations for high-performance 3D object detection. On the highly competitive KITTI 3D car detection leaderboard, TED ranked 1st among all submissions with competitive efficiency. |
2209.09543 | Peter Belcak | Peter Belc\'ak, Ard Kastrati, Flavio Schenker, Roger Wattenhofer | FACT: Learning Governing Abstractions Behind Integer Sequences | Accepted to the 36th Conference on Neural Information Processing
Systems (NeurIPS 2022) Track on Datasets and Benchmarks. 37 pages | null | null | null | cs.LG cs.AI cs.SC | http://creativecommons.org/licenses/by/4.0/ | Integer sequences are of central importance to the modeling of concepts
admitting complete finitary descriptions. We introduce a novel view on the
learning of such concepts and lay down a set of benchmarking tasks aimed at
conceptual understanding by machine learning models. These tasks indirectly
assess model ability to abstract, and challenge them to reason both
interpolatively and extrapolatively from the knowledge gained by observing
representative examples. To further aid research in knowledge representation
and reasoning, we present FACT, the Finitary Abstraction Comprehension Toolkit.
The toolkit comprises a large dataset of integer sequences containing both
organic and synthetic entries, a library for data pre-processing and
generation, a set of model performance evaluation tools, and a collection of
baseline model implementations, enabling future advancements to be made with
ease.
| [
{
"created": "Tue, 20 Sep 2022 08:20:03 GMT",
"version": "v1"
}
] | 2022-09-21 | [
[
"Belcák",
"Peter",
""
],
[
"Kastrati",
"Ard",
""
],
[
"Schenker",
"Flavio",
""
],
[
"Wattenhofer",
"Roger",
""
]
] | Integer sequences are of central importance to the modeling of concepts admitting complete finitary descriptions. We introduce a novel view on the learning of such concepts and lay down a set of benchmarking tasks aimed at conceptual understanding by machine learning models. These tasks indirectly assess model ability to abstract, and challenge them to reason both interpolatively and extrapolatively from the knowledge gained by observing representative examples. To further aid research in knowledge representation and reasoning, we present FACT, the Finitary Abstraction Comprehension Toolkit. The toolkit comprises a large dataset of integer sequences containing both organic and synthetic entries, a library for data pre-processing and generation, a set of model performance evaluation tools, and a collection of baseline model implementations, enabling future advancements to be made with ease. |
2312.12676 | Morteza Haghir Chehreghani | Jack Sandberg, Niklas {\AA}kerblom, Morteza Haghir Chehreghani | Combinatorial Gaussian Process Bandits in Bayesian Settings: Theory and
Application for Energy-Efficient Navigation | 39 pages, 10 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a combinatorial Gaussian process semi-bandit problem with
time-varying arm availability. Each round, an agent is provided a set of
available base arms and must select a subset of them to maximize the long-term
cumulative reward. Assuming the expected rewards are sampled from a Gaussian
process (GP) over the arm space, the agent can efficiently learn. We study the
Bayesian setting and provide novel Bayesian regret bounds for three GP-based
algorithms: GP-UCB, Bayes-GP-UCB and GP-TS. Our bounds extend previous results
for GP-UCB and GP-TS to a combinatorial setting with varying arm availability
and to the best of our knowledge, we provide the first Bayesian regret bound
for Bayes-GP-UCB. Time-varying arm availability encompasses other widely
considered bandit problems such as contextual bandits. We formulate the online
energy-efficient navigation problem as a combinatorial and contextual bandit
and provide a comprehensive experimental study on synthetic and real-world road
networks with detailed simulations. The contextual GP model obtains lower
regret and is less dependent on the informativeness of the prior compared to
the non-contextual Bayesian inference model. In addition, Thompson sampling
obtains lower regret than Bayes-UCB for both the contextual and non-contextual
model.
| [
{
"created": "Wed, 20 Dec 2023 00:31:43 GMT",
"version": "v1"
}
] | 2023-12-21 | [
[
"Sandberg",
"Jack",
""
],
[
"Åkerblom",
"Niklas",
""
],
[
"Chehreghani",
"Morteza Haghir",
""
]
] | We consider a combinatorial Gaussian process semi-bandit problem with time-varying arm availability. Each round, an agent is provided a set of available base arms and must select a subset of them to maximize the long-term cumulative reward. Assuming the expected rewards are sampled from a Gaussian process (GP) over the arm space, the agent can efficiently learn. We study the Bayesian setting and provide novel Bayesian regret bounds for three GP-based algorithms: GP-UCB, Bayes-GP-UCB and GP-TS. Our bounds extend previous results for GP-UCB and GP-TS to a combinatorial setting with varying arm availability and to the best of our knowledge, we provide the first Bayesian regret bound for Bayes-GP-UCB. Time-varying arm availability encompasses other widely considered bandit problems such as contextual bandits. We formulate the online energy-efficient navigation problem as a combinatorial and contextual bandit and provide a comprehensive experimental study on synthetic and real-world road networks with detailed simulations. The contextual GP model obtains lower regret and is less dependent on the informativeness of the prior compared to the non-contextual Bayesian inference model. In addition, Thompson sampling obtains lower regret than Bayes-UCB for both the contextual and non-contextual model. |
1007.5139 | Secretary Ijaia | Anuradha Banerjee (1) and Paramartha Dutta (2) ((1) Kalyani Govt.
Engg. College, India and (2) Visva-Bharati University, India) | Reputation-Based Attack-Resistant Cooperation Stimulation (RACS) For
Mobile Ad hoc Networks | 20 pages, 4 figures | International Journal of Artificial Intelligence & Applications
1.3 (2010) 71-90 | 10.5121/ijaia.2010.1306 | null | cs.NI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In mobile ad hoc networks (MANET), nodes usually belong to different
authorities and pursue different goals. In order to maximize their own
performance, nodes in such networks tend to be selfish and are not willing to
forward packets for the benefit of others. Meanwhile, some nodes may behave
maliciously and try to disrupt the network by wasting other nodes'
resources on a very large scale. In this article, we present a reputation-based
attack resistant cooperation stimulation (RACS) system which ensures that
damage caused by malicious nodes can be bounded and cooperation among the
selfish nodes can be enforced. Mathematical analyses of the system as well as
the simulation results have confirmed the effectiveness of our proposed system.
RACS is completely self-organizing and distributed. It does not require any
tamper-proof hardware or central management policy.
| [
{
"created": "Thu, 29 Jul 2010 07:54:51 GMT",
"version": "v1"
}
] | 2010-07-30 | [
[
"Banerjee",
"Anuradha",
""
],
[
"Dutta",
"Paramartha",
""
]
] | In mobile ad hoc networks (MANET), nodes usually belong to different authorities and pursue different goals. In order to maximize their own performance, nodes in such networks tend to be selfish and are not willing to forward packets for the benefit of others. Meanwhile, some nodes may behave maliciously and try to disrupt the network by wasting other nodes' resources on a very large scale. In this article, we present a reputation-based attack resistant cooperation stimulation (RACS) system which ensures that damage caused by malicious nodes can be bounded and cooperation among the selfish nodes can be enforced. Mathematical analyses of the system as well as the simulation results have confirmed the effectiveness of our proposed system. RACS is completely self-organizing and distributed. It does not require any tamper-proof hardware or central management policy. |
2407.08564 | Hengshu Zhu | Meng Hua, Yuan Cheng, Hengshu Zhu | The Career Interests of Large Language Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Large Language Models (LLMs) have significantly
extended their capabilities, evolving from basic text generation to complex,
human-like interactions. In light of the possibilities that LLMs could assume
significant workplace responsibilities, it becomes imminently necessary to
explore LLMs' capacities as professional assistants. This study focuses on the
aspect of career interests by applying the Occupation Network's Interest
Profiler short form to LLMs as if they were human participants and investigates
their hypothetical career interests and competence, examining how these vary
with language changes and model advancements. We analyzed the answers using a
general linear mixed model approach and found distinct career interest
inclinations among LLMs, particularly towards the social and artistic domains.
Interestingly, these preferences did not align with the occupations where LLMs
exhibited higher competence. This novel approach of using psychometric
instruments and sophisticated statistical tools on LLMs unveils fresh
perspectives on their integration into professional environments, highlighting
human-like tendencies and promoting a reevaluation of LLMs' self-perception and
competency alignment in the workforce.
| [
{
"created": "Thu, 11 Jul 2024 14:54:46 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Hua",
"Meng",
""
],
[
"Cheng",
"Yuan",
""
],
[
"Zhu",
"Hengshu",
""
]
] | Recent advancements in Large Language Models (LLMs) have significantly extended their capabilities, evolving from basic text generation to complex, human-like interactions. In light of the possibilities that LLMs could assume significant workplace responsibilities, it becomes imminently necessary to explore LLMs' capacities as professional assistants. This study focuses on the aspect of career interests by applying the Occupation Network's Interest Profiler short form to LLMs as if they were human participants and investigates their hypothetical career interests and competence, examining how these vary with language changes and model advancements. We analyzed the answers using a general linear mixed model approach and found distinct career interest inclinations among LLMs, particularly towards the social and artistic domains. Interestingly, these preferences did not align with the occupations where LLMs exhibited higher competence. This novel approach of using psychometric instruments and sophisticated statistical tools on LLMs unveils fresh perspectives on their integration into professional environments, highlighting human-like tendencies and promoting a reevaluation of LLMs' self-perception and competency alignment in the workforce. |
2204.11343 | Saeed Banaeian Far | Saeed Banaeian Far, Azadeh Imani Rad | Applying Digital Twins in Metaverse: User Interface, Security and
Privacy Challenges | This article has been accepted in "Journal of Metaverse". You can
cite as (APA): Banaeian Far, S. & Imani Rad, A. (2022). Applying Digital
Twins in Metaverse: User Interface, Security and Privacy Challenges. Journal
of Metaverse, 2 (1), 8-16. Retrieved from
https://dergipark.org.tr/en/pub/jmv/issue/67967/1072189 | 2022 | null | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | Digital Twins (DTs) are a conventional and well-known concept, proposed in
the 70s, that are popular in a broad spectrum of sciences, industry innovations,
and consortium alliances. However, in the last few years, the growth of digital
assets and online communications has attracted attention to DTs as highly
accurate twins of physical objects. Metaverse, as a digital world, is a concept
proposed in 1992 and has also become a popular paradigm and hot topic in public
where DTs can play critical roles. This study first presents definitions,
applications, and general challenges of DT and Metaverse. It then offers a
three-layer architecture linking the physical world to the Metaverse through a
user interface. Further, it investigates the security and privacy challenges of
using DTs in Metaverse. Finally, a conclusion, including possible solutions for
mentioned challenges and future works, will be provided.
| [
{
"created": "Sun, 24 Apr 2022 19:41:05 GMT",
"version": "v1"
}
] | 2022-04-26 | [
[
"Far",
"Saeed Banaeian",
""
],
[
"Rad",
"Azadeh Imani",
""
]
] | Digital Twins (DTs) are a conventional and well-known concept, proposed in the 70s, that are popular in a broad spectrum of sciences, industry innovations, and consortium alliances. However, in the last few years, the growth of digital assets and online communications has attracted attention to DTs as highly accurate twins of physical objects. Metaverse, as a digital world, is a concept proposed in 1992 and has also become a popular paradigm and hot topic in public where DTs can play critical roles. This study first presents definitions, applications, and general challenges of DT and Metaverse. It then offers a three-layer architecture linking the physical world to the Metaverse through a user interface. Further, it investigates the security and privacy challenges of using DTs in Metaverse. Finally, a conclusion, including possible solutions for mentioned challenges and future works, will be provided. |
1706.00977 | Vashist Avadhanula | Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi | Thompson Sampling for the MNL-Bandit | Accepted for presentation at Conference on Learning Theory (COLT)
2017 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a sequential subset selection problem under parameter
uncertainty, where at each time step, the decision maker selects a subset of
cardinality $K$ from $N$ possible items (arms), and observes a (bandit)
feedback in the form of the index of one of the items in said subset, or none.
Each item in the index set is ascribed a certain value (reward), and the
feedback is governed by a Multinomial Logit (MNL) choice model whose parameters
are a priori unknown. The objective of the decision maker is to maximize the
expected cumulative rewards over a finite horizon $T$, or alternatively,
minimize the regret relative to an oracle that knows the MNL parameters. We
refer to this as the MNL-Bandit problem. This problem is representative of a
larger family of exploration-exploitation problems that involve a combinatorial
objective, and arise in several important application domains. We present an
approach to adapt Thompson Sampling to this problem and show that it achieves
near-optimal regret as well as attractive numerical performance.
| [
{
"created": "Sat, 3 Jun 2017 16:48:34 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jun 2017 09:47:40 GMT",
"version": "v2"
},
{
"created": "Sat, 1 Jul 2017 17:36:16 GMT",
"version": "v3"
},
{
"created": "Sat, 27 Oct 2018 09:53:17 GMT",
"version": "v4"
},
{
"created": "Wed, 31 Oct 2018 06:57:46 GMT",
"version": "v5"
},
{
"created": "Wed, 19 Dec 2018 23:14:39 GMT",
"version": "v6"
},
{
"created": "Thu, 3 Jan 2019 19:45:01 GMT",
"version": "v7"
}
] | 2019-01-07 | [
[
"Agrawal",
"Shipra",
""
],
[
"Avadhanula",
"Vashist",
""
],
[
"Goyal",
"Vineet",
""
],
[
"Zeevi",
"Assaf",
""
]
] | We consider a sequential subset selection problem under parameter uncertainty, where at each time step, the decision maker selects a subset of cardinality $K$ from $N$ possible items (arms), and observes a (bandit) feedback in the form of the index of one of the items in said subset, or none. Each item in the index set is ascribed a certain value (reward), and the feedback is governed by a Multinomial Logit (MNL) choice model whose parameters are a priori unknown. The objective of the decision maker is to maximize the expected cumulative rewards over a finite horizon $T$, or alternatively, minimize the regret relative to an oracle that knows the MNL parameters. We refer to this as the MNL-Bandit problem. This problem is representative of a larger family of exploration-exploitation problems that involve a combinatorial objective, and arise in several important application domains. We present an approach to adapt Thompson Sampling to this problem and show that it achieves near-optimal regret as well as attractive numerical performance. |
2007.09834 | Ioannis Korkontzelos | Isa Inuwa-Dutse, Mark Liptrott and Ioannis Korkontzelos | Migration and Refugee Crisis: a Critical Analysis of Online Public
Perception | 15 pages, 8 figures | null | null | null | cs.SI cs.CY | http://creativecommons.org/licenses/by/4.0/ | The migration rate and the level of resentments towards migrants are an
important issue in modern civilisation. The infamous EU refugee crisis caught
many countries unprepared, leading to sporadic and rudimentary containment
measures that, in turn, led to significant public discourse. Decades of offline
data collected via traditional survey methods have been utilised earlier to
understand public opinion to foster peaceful coexistence. Capturing and
understanding online public opinion via social media is crucial towards a joint
strategic regulation spanning safety, rights of migrants and cordial
integration for economic prosperity. We present an analysis of opinions on
migrants and refugees expressed by the users of a very popular social platform,
Twitter. We analyse sentiment and the associated context of expressions in a
vast collection of tweets related to the EU refugee crisis. Our study reveals a
marginally higher proportion of negative sentiments vis-a-vis migrants and a
large proportion of the negative sentiments is more reflected among the
ordinary users. Users with many followers and non-governmental organisations
(NGO) tend to tweet favourably about the topic, offsetting the distribution of
negative sentiment. We opine that they can be encouraged to be more proactive
in neutralising negative attitudes that may arise concerning similar
incidences.
| [
{
"created": "Mon, 20 Jul 2020 02:04:01 GMT",
"version": "v1"
}
] | 2020-07-21 | [
[
"Inuwa-Dutse",
"Isa",
""
],
[
"Liptrott",
"Mark",
""
],
[
"Korkontzelos",
"Ioannis",
""
]
] | The migration rate and the level of resentments towards migrants are an important issue in modern civilisation. The infamous EU refugee crisis caught many countries unprepared, leading to sporadic and rudimentary containment measures that, in turn, led to significant public discourse. Decades of offline data collected via traditional survey methods have been utilised earlier to understand public opinion to foster peaceful coexistence. Capturing and understanding online public opinion via social media is crucial towards a joint strategic regulation spanning safety, rights of migrants and cordial integration for economic prosperity. We present an analysis of opinions on migrants and refugees expressed by the users of a very popular social platform, Twitter. We analyse sentiment and the associated context of expressions in a vast collection of tweets related to the EU refugee crisis. Our study reveals a marginally higher proportion of negative sentiments vis-a-vis migrants and a large proportion of the negative sentiments is more reflected among the ordinary users. Users with many followers and non-governmental organisations (NGO) tend to tweet favourably about the topic, offsetting the distribution of negative sentiment. We opine that they can be encouraged to be more proactive in neutralising negative attitudes that may arise concerning similar incidences. |
2209.04171 | Anastasios Papazafeiropoulos | Anastasios Papazafeiropoulos, Ioannis Krikidis, Pandelis Kourtessis | Impact of Channel Aging on Reconfigurable Intelligent Surface Aided
Massive MIMO Systems with Statistical CSI | accepted in IEEE TVT | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | The incorporation of reconfigurable intelligent surface (RIS) into massive
multiple-input-multiple-output (mMIMO) systems can unleash the potential of
next-generation networks by improving the performance of user equipments (UEs)
in service dead zones. However, their requirement for accurate channel state
information (CSI) is critical; in particular, applications with UE mobility,
which induce channel aging, make it challenging to achieve adequate quality
of service. Hence, in this work, we investigate the impact of channel aging on
the performance of RIS-assisted mMIMO systems under both spatial correlation
and imperfect CSI conditions. Specifically, by accounting for channel aging
during both uplink training and downlink data transmission phases, we first
perform minimum mean square error (MMSE) channel estimation to obtain the UE
effective channels with low overhead similar to conventional systems without
RIS. Next, we derive the downlink achievable sum spectral efficiency (SE) with
regularized zero-forcing (RZF) precoding in closed-form being dependent only on
large-scale statistics by using the deterministic equivalent (DE) analysis.
Subsequently, we present the attractive optimization of the achievable sum SE
with respect to the phase shifts and the total transmit power that can be
performed every several coherence intervals due to the slow variation of the
large-scale statistics. Numerical results validate the analytical expressions
and demonstrate the performance while allowing the extraction of insightful
design conclusions for common scenarios including UE mobility. In particular,
channel aging degrades the performance but its impact can be controlled by
choosing appropriately the frame duration or by increasing the number of RIS
elements.
| [
{
"created": "Fri, 9 Sep 2022 08:10:23 GMT",
"version": "v1"
}
] | 2022-09-12 | [
[
"Papazafeiropoulos",
"Anastasios",
""
],
[
"Krikidis",
"Ioannis",
""
],
[
"Kourtessis",
"Pandelis",
""
]
] | The incorporation of reconfigurable intelligent surface (RIS) into massive multiple-input-multiple-output (mMIMO) systems can unleash the potential of next-generation networks by improving the performance of user equipments (UEs) in service dead zones. However, their requirement for accurate channel state information (CSI) is critical; in particular, applications with UE mobility, which induce channel aging, make it challenging to achieve adequate quality of service. Hence, in this work, we investigate the impact of channel aging on the performance of RIS-assisted mMIMO systems under both spatial correlation and imperfect CSI conditions. Specifically, by accounting for channel aging during both uplink training and downlink data transmission phases, we first perform minimum mean square error (MMSE) channel estimation to obtain the UE effective channels with low overhead similar to conventional systems without RIS. Next, we derive the downlink achievable sum spectral efficiency (SE) with regularized zero-forcing (RZF) precoding in closed-form being dependent only on large-scale statistics by using the deterministic equivalent (DE) analysis. Subsequently, we present the attractive optimization of the achievable sum SE with respect to the phase shifts and the total transmit power that can be performed every several coherence intervals due to the slow variation of the large-scale statistics. Numerical results validate the analytical expressions and demonstrate the performance while allowing the extraction of insightful design conclusions for common scenarios including UE mobility. In particular, channel aging degrades the performance but its impact can be controlled by choosing appropriately the frame duration or by increasing the number of RIS elements. |
2211.07875 | Ye Tao | Ye Tao, Yuze Jiang, Pengfei Lin, Manabu Tsukada and Hiroshi Esaki | zk-PoT: Zero-Knowledge Proof of Traffic for Privacy Enabled Cooperative
Perception | IEEE Consumer Communications & Networking Conference (CCNC) 2023 | null | null | null | cs.NI cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cooperative perception is an essential and widely discussed application of
connected automated vehicles. However, the authenticity of perception data is
not ensured, because the vehicles cannot independently verify the event they
did not see. Many methods, including trust-based (i.e., statistical) approaches
and plausibility-based methods, have been proposed to determine data
authenticity. However, these methods cannot verify data without a priori
knowledge. In this study, a novel approach to constructing self-proving data
from the number plates of target vehicles was proposed. By regarding the
pseudonym and number plate as a shared secret and letting multiple vehicles
prove they know it independently, the data authenticity problem can be
transformed to a cryptography problem that can be solved without trust or
plausibility evaluations. Our work can be adapted to the existing works
including ETSI/ISO ITS standards while maintaining backward compatibility.
Analyses of common attacks and attacks specific to the proposed method reveal
that most attacks can be prevented, whereas some other attacks, such as
collusion attacks, can only be mitigated. Experiments based on a realistic data
set show that the rate of successful verification can reach 70\% to 80\% at rush
hours.
| [
{
"created": "Tue, 15 Nov 2022 03:50:08 GMT",
"version": "v1"
}
] | 2022-11-16 | [
[
"Tao",
"Ye",
""
],
[
"Jiang",
"Yuze",
""
],
[
"Lin",
"Pengfei",
""
],
[
"Tsukada",
"Manabu",
""
],
[
"Esaki",
"Hiroshi",
""
]
] | Cooperative perception is an essential and widely discussed application of connected automated vehicles. However, the authenticity of perception data is not ensured, because the vehicles cannot independently verify the event they did not see. Many methods, including trust-based (i.e., statistical) approaches and plausibility-based methods, have been proposed to determine data authenticity. However, these methods cannot verify data without a priori knowledge. In this study, a novel approach to constructing self-proving data from the number plates of target vehicles was proposed. By regarding the pseudonym and number plate as a shared secret and letting multiple vehicles prove they know it independently, the data authenticity problem can be transformed to a cryptography problem that can be solved without trust or plausibility evaluations. Our work can be adapted to the existing works including ETSI/ISO ITS standards while maintaining backward compatibility. Analyses of common attacks and attacks specific to the proposed method reveal that most attacks can be prevented, whereas some other attacks, such as collusion attacks, can only be mitigated. Experiments based on a realistic data set show that the rate of successful verification can reach 70\% to 80\% at rush hours. |
2105.07855 | A Mallikarjuna Reddy dr | Swarajya lakshmi v papineni, A.Mallikarjuna Reddy, Sudeepti
yarlagadda, Snigdha Yarlagadda, Haritha Akkinen | An Extensive Analytical Approach on Human Resources using Random Forest
Algorithm | null | null | 10.14445/22315381/IJETT-V69I5P217 | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | The current job survey shows that most software employees are planning to
change their job role due to high pay for recent jobs such as data scientists,
business analysts and artificial intelligence fields. The survey also indicated
that work life imbalances, low pay, uneven shifts and many other factors also
make employees think about changing their work life. In this paper, for an
efficient organisation of the company in terms of human resources, the proposed
system designed a model with the help of a random forest algorithm by
considering different employee parameters. This helps the HR department retain
the employee by identifying gaps and helping the organisation to run smoothly
with a good employee retention ratio. This combination of HR and data science
can help the productivity, collaboration and well-being of employees of the
organisation. It also helps to develop strategies that have an impact on the
performance of employees in terms of external and social factors.
| [
{
"created": "Fri, 7 May 2021 07:35:23 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"papineni",
"Swarajya lakshmi v",
""
],
[
"Reddy",
"A. Mallikarjuna",
""
],
[
"yarlagadda",
"Sudeepti",
""
],
[
"Yarlagadda",
"Snigdha",
""
],
[
"Akkinen",
"Haritha",
""
]
] | The current job survey shows that most software employees are planning to change their job role due to high pay for recent jobs such as data scientists, business analysts and artificial intelligence fields. The survey also indicated that work life imbalances, low pay, uneven shifts and many other factors also make employees think about changing their work life. In this paper, for an efficient organisation of the company in terms of human resources, the proposed system designed a model with the help of a random forest algorithm by considering different employee parameters. This helps the HR department retain the employee by identifying gaps and helping the organisation to run smoothly with a good employee retention ratio. This combination of HR and data science can help the productivity, collaboration and well-being of employees of the organisation. It also helps to develop strategies that have an impact on the performance of employees in terms of external and social factors. |
1811.11700 | Richard Spence | Faryad Darabi Sahneh, Alon Efrat, Stephen Kobourov, Spencer Krieger,
Richard Spence | Approximation algorithms for the vertex-weighted grade-of-service
Steiner tree problem | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a graph $G = (V,E)$ and a subset $T \subseteq V$ of terminals, a
\emph{Steiner tree} of $G$ is a tree that spans $T$. In the vertex-weighted
Steiner tree (VST) problem, each vertex is assigned a non-negative weight, and
the goal is to compute a minimum weight Steiner tree of $G$.
We study a natural generalization of the VST problem motivated by multi-level
graph construction, the \emph{vertex-weighted grade-of-service Steiner tree
problem} (V-GSST), which can be stated as follows: given a graph $G$ and
terminals $T$, where each terminal $v \in T$ requires a facility of a minimum
grade of service $R(v)\in \{1,2,\ldots\ell\}$, compute a Steiner tree $G'$ by
installing facilities on a subset of vertices, such that any two vertices
requiring a certain grade of service are connected by a path in $G'$ with the
minimum grade of service or better. Facilities of higher grade are more costly
than facilities of lower grade. Multi-level variants such as this one can be
useful in network design problems where vertices may require facilities of
varying priority.
While similar problems have been studied in the edge-weighted case, they have
not been studied as well in the more general vertex-weighted case. We first
describe a simple heuristic for the V-GSST problem whose approximation ratio
depends on $\ell$, the number of grades of service. We then generalize the
greedy algorithm of [Klein \& Ravi, 1995] to show that the V-GSST problem
admits a $(2 \ln |T|)$-approximation, where $T$ is the set of terminals
requiring some facility. This result is surprising, as it shows that the
(seemingly harder) multi-grade problem can be approximated as well as the VST
problem, and that the approximation ratio does not depend on the number of
grades of service.
| [
{
"created": "Wed, 28 Nov 2018 17:37:13 GMT",
"version": "v1"
},
{
"created": "Fri, 3 May 2019 23:02:41 GMT",
"version": "v2"
}
] | 2019-05-07 | [
[
"Sahneh",
"Faryad Darabi",
""
],
[
"Efrat",
"Alon",
""
],
[
"Kobourov",
"Stephen",
""
],
[
"Krieger",
"Spencer",
""
],
[
"Spence",
"Richard",
""
]
] | Given a graph $G = (V,E)$ and a subset $T \subseteq V$ of terminals, a \emph{Steiner tree} of $G$ is a tree that spans $T$. In the vertex-weighted Steiner tree (VST) problem, each vertex is assigned a non-negative weight, and the goal is to compute a minimum weight Steiner tree of $G$. We study a natural generalization of the VST problem motivated by multi-level graph construction, the \emph{vertex-weighted grade-of-service Steiner tree problem} (V-GSST), which can be stated as follows: given a graph $G$ and terminals $T$, where each terminal $v \in T$ requires a facility of a minimum grade of service $R(v)\in \{1,2,\ldots\ell\}$, compute a Steiner tree $G'$ by installing facilities on a subset of vertices, such that any two vertices requiring a certain grade of service are connected by a path in $G'$ with the minimum grade of service or better. Facilities of higher grade are more costly than facilities of lower grade. Multi-level variants such as this one can be useful in network design problems where vertices may require facilities of varying priority. While similar problems have been studied in the edge-weighted case, they have not been studied as well in the more general vertex-weighted case. We first describe a simple heuristic for the V-GSST problem whose approximation ratio depends on $\ell$, the number of grades of service. We then generalize the greedy algorithm of [Klein \& Ravi, 1995] to show that the V-GSST problem admits a $(2 \ln |T|)$-approximation, where $T$ is the set of terminals requiring some facility. This result is surprising, as it shows that the (seemingly harder) multi-grade problem can be approximated as well as the VST problem, and that the approximation ratio does not depend on the number of grades of service. |
2011.13118 | Xiaoxiao Long | Xiaoxiao Long, Lingjie Liu, Wei Li, Christian Theobalt, Wenping Wang | Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel method for multi-view depth estimation from a single
video, which is a critical task in various applications, such as perception,
reconstruction and robot navigation. Although previous learning-based methods
have demonstrated compelling results, most works estimate depth maps of
individual video frames independently, without taking into consideration the
strong geometric and temporal coherence among the frames. Moreover, current
state-of-the-art (SOTA) models mostly adopt a fully 3D convolution network for
cost regularization and therefore require high computational cost, thus
limiting their deployment in real-world applications. Our method achieves
temporally coherent depth estimation results by using a novel Epipolar
Spatio-Temporal (EST) transformer to explicitly associate geometric and
temporal correlation with multiple estimated depth maps. Furthermore, to reduce
the computational cost, inspired by recent Mixture-of-Experts models, we design
a compact hybrid network consisting of a 2D context-aware network and a 3D
matching network which learn 2D context information and 3D disparity cues
separately. Extensive experiments demonstrate that our method achieves higher
accuracy in depth estimation and significant speedup than the SOTA methods.
| [
{
"created": "Thu, 26 Nov 2020 04:04:21 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Dec 2020 02:55:11 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jul 2021 16:02:54 GMT",
"version": "v3"
}
] | 2021-07-13 | [
[
"Long",
"Xiaoxiao",
""
],
[
"Liu",
"Lingjie",
""
],
[
"Li",
"Wei",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Wang",
"Wenping",
""
]
] | We present a novel method for multi-view depth estimation from a single video, which is a critical task in various applications, such as perception, reconstruction and robot navigation. Although previous learning-based methods have demonstrated compelling results, most works estimate depth maps of individual video frames independently, without taking into consideration the strong geometric and temporal coherence among the frames. Moreover, current state-of-the-art (SOTA) models mostly adopt a fully 3D convolution network for cost regularization and therefore require high computational cost, thus limiting their deployment in real-world applications. Our method achieves temporally coherent depth estimation results by using a novel Epipolar Spatio-Temporal (EST) transformer to explicitly associate geometric and temporal correlation with multiple estimated depth maps. Furthermore, to reduce the computational cost, inspired by recent Mixture-of-Experts models, we design a compact hybrid network consisting of a 2D context-aware network and a 3D matching network which learn 2D context information and 3D disparity cues separately. Extensive experiments demonstrate that our method achieves higher accuracy in depth estimation and significant speedup than the SOTA methods. |
2006.02471 | Julio C. S. Reis | Julio C. S. Reis, Philipe de Freitas Melo, Kiran Garimella, Fabr\'icio
Benevenuto | Can WhatsApp Benefit from Debunked Fact-Checked Stories to Reduce
Misinformation? | This is a preprint version of an accepted manuscript on The Harvard
Kennedy School (HKS) Misinformation Review. Please, consider to cite it
instead of this one | null | null | null | cs.CY cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | WhatsApp was alleged to be widely used to spread misinformation and
propaganda during elections in Brazil and India. Due to the private encrypted
nature of the messages on WhatsApp, it is hard to track the dissemination of
misinformation at scale. In this work, using public WhatsApp data, we observe
that misinformation has been largely shared on WhatsApp public groups even
after they were already fact-checked by popular fact-checking agencies. This
represents a significant portion of misinformation spread in both Brazil and
India in the groups analyzed. We posit that such misinformation content could
be prevented if WhatsApp had a means to flag already fact-checked content. To
this end, we propose an architecture that could be implemented by WhatsApp to
counter such misinformation. Our proposal respects the current end-to-end
encryption architecture on WhatsApp, thus protecting users' privacy while
providing an approach to detect the misinformation that benefits from
fact-checking efforts.
| [
{
"created": "Wed, 3 Jun 2020 18:28:57 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Aug 2020 03:11:38 GMT",
"version": "v2"
}
] | 2020-08-07 | [
[
"Reis",
"Julio C. S.",
""
],
[
"Melo",
"Philipe de Freitas",
""
],
[
"Garimella",
"Kiran",
""
],
[
"Benevenuto",
"Fabrício",
""
]
] | WhatsApp was alleged to be widely used to spread misinformation and propaganda during elections in Brazil and India. Due to the private encrypted nature of the messages on WhatsApp, it is hard to track the dissemination of misinformation at scale. In this work, using public WhatsApp data, we observe that misinformation has been largely shared on WhatsApp public groups even after they were already fact-checked by popular fact-checking agencies. This represents a significant portion of misinformation spread in both Brazil and India in the groups analyzed. We posit that such misinformation content could be prevented if WhatsApp had a means to flag already fact-checked content. To this end, we propose an architecture that could be implemented by WhatsApp to counter such misinformation. Our proposal respects the current end-to-end encryption architecture on WhatsApp, thus protecting users' privacy while providing an approach to detect the misinformation that benefits from fact-checking efforts. |
1711.10639 | EPTCS | Hadi Ravanbakhsh (1), Sriram Sankaranarayanan (1) ((1) University of
Colorado, Boulder) | A Class of Control Certificates to Ensure Reach-While-Stay for Switched
Systems | In Proceedings SYNT 2017, arXiv:1711.10224 | EPTCS 260, 2017, pp. 44-61 | 10.4204/EPTCS.260.6 | null | cs.SY cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we consider the problem of synthesizing switching
controllers for temporal properties through the composition of simple primitive
reach-while-stay (RWS) properties. Reach-while-stay properties specify that the
system states starting from an initial set I, must reach a goal (target) set G
in finite time, while remaining inside a safe set S. Our approach synthesizes
switched controllers that select between finitely many modes to satisfy the
given RWS specification. To do so, we consider control certificates, which are
Lyapunov-like functions that represent control strategies to achieve the
desired specification. However, for RWS problems, a control Lyapunov-like
function is often hard to synthesize in a simple polynomial form. Therefore, we
combine control barrier and Lyapunov functions with an additional compatibility
condition between them. Using this approach, the controller synthesis problem
reduces to one of solving quantified nonlinear constrained problems that are
handled using a combination of SMT solvers. The synthesis of controllers is
demonstrated through a set of interesting numerical examples drawn from the
related work, and compared with the state-of-the-art tool SCOTS. Our evaluation
suggests that our approach is computationally feasible, and adds to the growing
body of formal approaches to controller synthesis.
| [
{
"created": "Wed, 29 Nov 2017 01:26:09 GMT",
"version": "v1"
}
] | 2017-11-30 | [
[
"Ravanbakhsh",
"Hadi",
""
],
[
"Sankaranarayanan",
"Sriram",
""
]
] | In this article, we consider the problem of synthesizing switching controllers for temporal properties through the composition of simple primitive reach-while-stay (RWS) properties. Reach-while-stay properties specify that the system states starting from an initial set I, must reach a goal (target) set G in finite time, while remaining inside a safe set S. Our approach synthesizes switched controllers that select between finitely many modes to satisfy the given RWS specification. To do so, we consider control certificates, which are Lyapunov-like functions that represent control strategies to achieve the desired specification. However, for RWS problems, a control Lyapunov-like function is often hard to synthesize in a simple polynomial form. Therefore, we combine control barrier and Lyapunov functions with an additional compatibility condition between them. Using this approach, the controller synthesis problem reduces to one of solving quantified nonlinear constrained problems that are handled using a combination of SMT solvers. The synthesis of controllers is demonstrated through a set of interesting numerical examples drawn from the related work, and compared with the state-of-the-art tool SCOTS. Our evaluation suggests that our approach is computationally feasible, and adds to the growing body of formal approaches to controller synthesis. |
2108.13892 | Liesbeth Allein | Liesbeth Allein, Marie-Francine Moens and Domenico Perrotta | Like Article, Like Audience: Enforcing Multimodal Correlations for
Disinformation Detection | null | null | null | null | cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | User-generated content (e.g., tweets and profile descriptions) and shared
content between users (e.g., news articles) reflect a user's online identity.
This paper investigates whether correlations between user-generated and
user-shared content can be leveraged for detecting disinformation in online
news articles. We develop a multimodal learning algorithm for disinformation
detection. The latent representations of news articles and user-generated
content allow the model, during training, to be guided by the profile of users
who prefer content similar to the news article that is evaluated, and this
effect is reinforced if that content is shared among different users. By only
leveraging user information during model optimization, the model does not rely
on user profiling when predicting an article's veracity. The algorithm is
successfully applied to three widely used neural classifiers, and results are
obtained on different datasets. Visualization techniques show that the proposed
model learns feature representations of unseen news articles that better
discriminate between fake and real news texts.
| [
{
"created": "Tue, 31 Aug 2021 14:50:16 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Allein",
"Liesbeth",
""
],
[
"Moens",
"Marie-Francine",
""
],
[
"Perrotta",
"Domenico",
""
]
] | User-generated content (e.g., tweets and profile descriptions) and shared content between users (e.g., news articles) reflect a user's online identity. This paper investigates whether correlations between user-generated and user-shared content can be leveraged for detecting disinformation in online news articles. We develop a multimodal learning algorithm for disinformation detection. The latent representations of news articles and user-generated content allow the model, during training, to be guided by the profile of users who prefer content similar to the news article that is evaluated, and this effect is reinforced if that content is shared among different users. By only leveraging user information during model optimization, the model does not rely on user profiling when predicting an article's veracity. The algorithm is successfully applied to three widely used neural classifiers, and results are obtained on different datasets. Visualization techniques show that the proposed model learns feature representations of unseen news articles that better discriminate between fake and real news texts. |
2009.12724 | Shixian Wen | Shixian Wen, Amanda Rios, Laurent Itti | Beneficial Perturbations Network for Defending Adversarial Examples | The paper is under consideration at Pattern Recognition Letters | null | null | null | cs.LG cs.CR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks can be fooled by adversarial attacks: adding carefully
computed small adversarial perturbations to clean inputs can cause
misclassification on state-of-the-art machine learning models. The reason is
that neural networks fail to accommodate the distribution drift of the input
data caused by adversarial perturbations. Here, we present a new solution -
Beneficial Perturbation Network (BPN) - to defend against adversarial attacks
by fixing the distribution drift. During training, BPN generates and leverages
beneficial perturbations (somewhat opposite to well-known adversarial
perturbations) by adding new, out-of-network biasing units. Biasing units
influence the parameter space of the network, to preempt and neutralize future
adversarial perturbations on input data samples. To achieve this, BPN creates
reverse adversarial attacks during training, with very little cost, by
recycling the training gradients already computed. Reverse attacks are captured
by the biasing units, and the biases can in turn effectively defend against
future adversarial examples. Reverse attacks are a shortcut, i.e., they affect
the network's parameters without requiring instantiation of adversarial
examples that could assist training. We provide comprehensive empirical
evidence showing that 1) BPN is robust to adversarial examples and is much more
memory- and computationally efficient than classical adversarial training; 2)
BPN can defend against adversarial examples with negligible additional
computation and parameter costs compared to training only on clean examples; 3)
BPN hurts the accuracy on clean examples much less than classic adversarial
training; 4) BPN can improve the generalization of the network; and 5) BPN
trained only with the Fast Gradient Sign Attack can generalize to defend
against PGD attacks.
| [
{
"created": "Sun, 27 Sep 2020 02:05:26 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Mar 2021 07:25:51 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Sep 2021 13:05:55 GMT",
"version": "v3"
}
] | 2021-09-14 | [
[
"Wen",
"Shixian",
""
],
[
"Rios",
"Amanda",
""
],
[
"Itti",
"Laurent",
""
]
] | Deep neural networks can be fooled by adversarial attacks: adding carefully computed small adversarial perturbations to clean inputs can cause misclassification on state-of-the-art machine learning models. The reason is that neural networks fail to accommodate the distribution drift of the input data caused by adversarial perturbations. Here, we present a new solution - Beneficial Perturbation Network (BPN) - to defend against adversarial attacks by fixing the distribution drift. During training, BPN generates and leverages beneficial perturbations (somewhat opposite to well-known adversarial perturbations) by adding new, out-of-network biasing units. Biasing units influence the parameter space of the network, to preempt and neutralize future adversarial perturbations on input data samples. To achieve this, BPN creates reverse adversarial attacks during training, with very little cost, by recycling the training gradients already computed. Reverse attacks are captured by the biasing units, and the biases can in turn effectively defend against future adversarial examples. Reverse attacks are a shortcut, i.e., they affect the network's parameters without requiring instantiation of adversarial examples that could assist training. We provide comprehensive empirical evidence showing that 1) BPN is robust to adversarial examples and is much more memory- and computationally efficient than classical adversarial training; 2) BPN can defend against adversarial examples with negligible additional computation and parameter costs compared to training only on clean examples; 3) BPN hurts the accuracy on clean examples much less than classic adversarial training; 4) BPN can improve the generalization of the network; and 5) BPN trained only with the Fast Gradient Sign Attack can generalize to defend against PGD attacks. |
2101.03438 | Junde Li | Junde Li, Rasit Topaloglu, Swaroop Ghosh | Quantum Generative Models for Small Molecule Drug Discovery | null | null | null | null | cs.ET cs.LG quant-ph | http://creativecommons.org/licenses/by/4.0/ | Existing drug discovery pipelines take 5-10 years and cost billions of
dollars. Computational approaches aim to sample from regions of the space of
all molecular and solid-state compounds, called chemical space, which could be
on the order of $10^{60}$ in size. Deep generative models can model the underlying probability
distribution of both the physical structures and properties of drugs and relate
them nonlinearly. By exploiting patterns in massive datasets, these models can
distill salient features that characterize the molecules. Generative
Adversarial Networks (GANs) discover drug candidates by generating molecular
structures that obey chemical and physical properties and show affinity towards
binding with the receptor for a target disease. However, classical GANs cannot
explore certain regions of the chemical space and suffer from the curse of
dimensionality. A full quantum GAN may require more than 90 qubits
even to generate QM9-like small molecules. We propose a qubit-efficient quantum
GAN with a hybrid generator (QGAN-HG) to learn a richer representation of
molecules by searching the exponentially large chemical space with few qubits
more efficiently than a classical GAN. The QGAN-HG model is composed of a
hybrid quantum generator that supports various numbers of qubits and quantum
circuit layers, and a classical discriminator. QGAN-HG with only 14.93%
retained parameters can learn the molecular distribution as efficiently as its
classical counterpart. The QGAN-HG variation with patched circuits considerably
accelerates our standard QGAN-HG training process and avoids the potential
gradient-vanishing issue of deep neural networks. Code is available on GitHub
https://github.com/jundeli/quantum-gan.
| [
{
"created": "Sat, 9 Jan 2021 22:33:16 GMT",
"version": "v1"
}
] | 2021-01-12 | [
[
"Li",
"Junde",
""
],
[
"Topaloglu",
"Rasit",
""
],
[
"Ghosh",
"Swaroop",
""
]
] | Existing drug discovery pipelines take 5-10 years and cost billions of dollars. Computational approaches aim to sample from regions of the space of all molecular and solid-state compounds, called chemical space, which could be on the order of $10^{60}$ in size. Deep generative models can model the underlying probability distribution of both the physical structures and properties of drugs and relate them nonlinearly. By exploiting patterns in massive datasets, these models can distill salient features that characterize the molecules. Generative Adversarial Networks (GANs) discover drug candidates by generating molecular structures that obey chemical and physical properties and show affinity towards binding with the receptor for a target disease. However, classical GANs cannot explore certain regions of the chemical space and suffer from the curse of dimensionality. A full quantum GAN may require more than 90 qubits even to generate QM9-like small molecules. We propose a qubit-efficient quantum GAN with a hybrid generator (QGAN-HG) to learn a richer representation of molecules by searching the exponentially large chemical space with few qubits more efficiently than a classical GAN. The QGAN-HG model is composed of a hybrid quantum generator that supports various numbers of qubits and quantum circuit layers, and a classical discriminator. QGAN-HG with only 14.93% retained parameters can learn the molecular distribution as efficiently as its classical counterpart. The QGAN-HG variation with patched circuits considerably accelerates our standard QGAN-HG training process and avoids the potential gradient-vanishing issue of deep neural networks. Code is available on GitHub https://github.com/jundeli/quantum-gan. |
1602.01168 | Zhuolin Jiang | Zhuolin Jiang, Yaming Wang, Larry Davis, Walt Andrews, Viktor Rozgic | Learning Discriminative Features via Label Consistent Neural Network | null | null | null | null | cs.CV cs.LG cs.MM cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Convolutional Neural Networks (CNN) enforce supervised information only
at the output layer, and hidden layers are trained by back propagating the
prediction error from the output layer without explicit supervision. We propose
a supervised feature learning approach, Label Consistent Neural Network, which
enforces direct supervision in late hidden layers. We associate each neuron in
a hidden layer with a particular class label and encourage it to be activated
for input signals from the same class. More specifically, we introduce a label
consistency regularization called "discriminative representation error" loss
for late hidden layers and combine it with classification error loss to build
our overall objective function. This label consistency constraint alleviates
the common problem of gradient vanishing and leads to faster convergence; it
also makes the features derived from late hidden layers discriminative enough
for classification even using a simple $k$-NN classifier, since input signals
from the same class will have very similar representations. Experimental
results demonstrate that our approach achieves state-of-the-art performances on
several public benchmarks for action and object category recognition.
| [
{
"created": "Wed, 3 Feb 2016 02:41:33 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Jun 2016 02:45:35 GMT",
"version": "v2"
}
] | 2016-06-07 | [
[
"Jiang",
"Zhuolin",
""
],
[
"Wang",
"Yaming",
""
],
[
"Davis",
"Larry",
""
],
[
"Andrews",
"Walt",
""
],
[
"Rozgic",
"Viktor",
""
]
] | Deep Convolutional Neural Networks (CNN) enforce supervised information only at the output layer, and hidden layers are trained by back propagating the prediction error from the output layer without explicit supervision. We propose a supervised feature learning approach, Label Consistent Neural Network, which enforces direct supervision in late hidden layers. We associate each neuron in a hidden layer with a particular class label and encourage it to be activated for input signals from the same class. More specifically, we introduce a label consistency regularization called "discriminative representation error" loss for late hidden layers and combine it with classification error loss to build our overall objective function. This label consistency constraint alleviates the common problem of gradient vanishing and leads to faster convergence; it also makes the features derived from late hidden layers discriminative enough for classification even using a simple $k$-NN classifier, since input signals from the same class will have very similar representations. Experimental results demonstrate that our approach achieves state-of-the-art performances on several public benchmarks for action and object category recognition. |
1403.0338 | Sanjaya Kumar Panda | Jitendra Kumar Rout, Sourav Kumar Bhoi, Sanjaya Kumar Panda | SFTP : A Secure and Fault-Tolerant Paradigm against Blackhole Attack in
MANET | 6 pages, 9 figures | International Journal of Computer Applications 2013 | 10.5120/10623-5343 | pxc3885343 | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Security issues in MANET are a challenging task nowadays. MANETs are
vulnerable to passive attacks and active attacks because of a limited number of
resources and lack of centralized authority. The blackhole attack is a
network-layer attack which degrades the network performance by dropping
packets. In this paper, we have proposed a Secure Fault-Tolerant Paradigm
(SFTP) which checks the blackhole attack in the network. The three phases used
in the SFTP algorithm are the design of the coverage area to find the area of
coverage, a Network Connection algorithm to design a fault-tolerant model, and
a Route Discovery algorithm to discover the route and deliver data from source
to destination. SFTP gives better network performance by making the network
fault free.
| [
{
"created": "Mon, 3 Mar 2014 08:36:22 GMT",
"version": "v1"
}
] | 2014-03-04 | [
[
"Rout",
"Jitendra Kumar",
""
],
[
"Bhoi",
"Sourav Kumar",
""
],
[
"Panda",
"Sanjaya Kumar",
""
]
] | Security issues in MANET are a challenging task nowadays. MANETs are vulnerable to passive attacks and active attacks because of a limited number of resources and lack of centralized authority. The blackhole attack is a network-layer attack which degrades the network performance by dropping packets. In this paper, we have proposed a Secure Fault-Tolerant Paradigm (SFTP) which checks the blackhole attack in the network. The three phases used in the SFTP algorithm are the design of the coverage area to find the area of coverage, a Network Connection algorithm to design a fault-tolerant model, and a Route Discovery algorithm to discover the route and deliver data from source to destination. SFTP gives better network performance by making the network fault free. |
2104.00538 | Timur \.Inan | Inan Timur, Baba Ahmet Fevzi | Prediction of Wind Speed Using Artificial Neural Networks and ANFIS
Methods (Observation Buoy Example) | 5 pages, in Turkish language | null | null | null | cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimation of the wind speed plays an important role in many issues such as
route determination of ships, efficient use of wind roses, and correct planning
of agricultural activities. In this study, wind velocity is estimated using
artificial neural networks (ANN) and adaptive artificial
neural fuzzy inference system (ANFIS) methods. The data required for estimation
was obtained from the float named E1M3A, which is a float inside the POSEIDON
float system. The proposed ANN is a Nonlinear Auto Regressive with External
Input (NARX) type of artificial neural network with 3 layers, 50 neurons, 6
inputs and 1 output. The ANFIS system introduced is a fuzzy inference system
with 6 inputs, 1 output, and 3 membership functions (MF) per input. The
proposed systems were trained to estimate the wind speed 3 hours ahead, and
the success of each system was assessed by comparing the obtained predictions
with real measurements. Mean Squared Error
(MSE) and the regression between the predictions and expected values (R) were
used to evaluate the success of the estimation values obtained from the
systems. According to estimation results, ANN achieved 2.19 MSE and 0.897 R
values in training, 2.88 MSE and 0.866 R values in validation, and 2.93 MSE and
0.857 R values in testing. ANFIS method has obtained 0.31634 MSE and 0.99 R
values
| [
{
"created": "Mon, 29 Mar 2021 19:01:43 GMT",
"version": "v1"
}
] | 2021-04-02 | [
[
"Timur",
"Inan",
""
],
[
"Fevzi",
"Baba Ahmet",
""
]
] | Estimation of the wind speed plays an important role in many issues such as route determination of ships, efficient use of wind roses, and correct planning of agricultural activities. In this study, wind velocity is estimated using artificial neural networks (ANN) and adaptive artificial neural fuzzy inference system (ANFIS) methods. The data required for estimation was obtained from the float named E1M3A, which is a float inside the POSEIDON float system. The proposed ANN is a Nonlinear Auto Regressive with External Input (NARX) type of artificial neural network with 3 layers, 50 neurons, 6 inputs and 1 output. The ANFIS system introduced is a fuzzy inference system with 6 inputs, 1 output, and 3 membership functions (MF) per input. The proposed systems were trained to estimate the wind speed 3 hours ahead, and the success of each system was assessed by comparing the obtained predictions with real measurements. Mean Squared Error (MSE) and the regression between the predictions and expected values (R) were used to evaluate the success of the estimation values obtained from the systems. According to estimation results, ANN achieved 2.19 MSE and 0.897 R values in training, 2.88 MSE and 0.866 R values in validation, and 2.93 MSE and 0.857 R values in testing. The ANFIS method obtained 0.31634 MSE and 0.99 R values |
2002.06761 | Yongming Li | Yongming Li, Yan Lei, Pin Wang, Yuchuan Liu | Hybrid Embedded Deep Stacked Sparse Autoencoder with w_LPPD SVM Ensemble | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning is a feature learning method with strong nonlinear
feature transformation that has become increasingly important in many fields
of artificial intelligence. The deep autoencoder is one representative deep
learning method, and it can effectively extract abstract information from
datasets. However, it does not consider the complementarity between the deep
features and the original features during deep feature transformation, and it
suffers from the small-sample problem. To solve these problems, a novel deep
autoencoder, the hybrid feature embedded stacked sparse autoencoder (HFESAE),
is proposed in this paper. HFESAE can learn discriminant deep features with
the help of embedding the original features to filter weak hidden-layer
outputs during training. To address the issue that the class representation
ability of the abstract information is limited by the small-sample problem, a
feature fusion strategy is designed that combines the abstract information
learned by HFESAE with the original features to obtain hybrid features for
feature reduction. The strategy is a hybrid feature selection strategy based
on L1 regularization followed by a support vector machine (SVM) ensemble
model, in which a weighted local discriminant preservation projection
(w_LPPD) is designed and employed on each base classifier. At the end of this
paper, several representative public datasets are used to verify the
effectiveness of the proposed algorithm. The experimental results demonstrate
that the proposed feature learning method yields superior performance
compared to other existing and state-of-the-art feature learning algorithms,
including some representative deep autoencoder methods.
| [
{
"created": "Mon, 17 Feb 2020 04:06:05 GMT",
"version": "v1"
}
] | 2020-02-18 | [
[
"Li",
"Yongming",
""
],
[
"Lei",
"Yan",
""
],
[
"Wang",
"Pin",
""
],
[
"Liu",
"Yuchuan",
""
]
] | Deep learning is a feature learning method with strong nonlinear feature transformation that has become increasingly important in many fields of artificial intelligence. The deep autoencoder is one representative deep learning method, and it can effectively extract abstract information from datasets. However, it does not consider the complementarity between the deep features and the original features during deep feature transformation, and it suffers from the small-sample problem. To solve these problems, a novel deep autoencoder, the hybrid feature embedded stacked sparse autoencoder (HFESAE), is proposed in this paper. HFESAE can learn discriminant deep features with the help of embedding the original features to filter weak hidden-layer outputs during training. To address the issue that the class representation ability of the abstract information is limited by the small-sample problem, a feature fusion strategy is designed that combines the abstract information learned by HFESAE with the original features to obtain hybrid features for feature reduction. The strategy is a hybrid feature selection strategy based on L1 regularization followed by a support vector machine (SVM) ensemble model, in which a weighted local discriminant preservation projection (w_LPPD) is designed and employed on each base classifier. At the end of this paper, several representative public datasets are used to verify the effectiveness of the proposed algorithm. The experimental results demonstrate that the proposed feature learning method yields superior performance compared to other existing and state-of-the-art feature learning algorithms, including some representative deep autoencoder methods. |
2209.14795 | Salman Manzoor | Salman Manzoor and Antonios Gouglidis and Matthew Bradbury and Neeraj
Suri | ThreatPro: Multi-Layer Threat Analysis in the Cloud | 32 pages, 14 figures | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Many effective Threat Analysis (TA) techniques exist that focus on analyzing
threats to targeted assets (e.g., components, services). These techniques
consider static interconnections among the assets. However, in dynamic
environments, such as the Cloud, resources can instantiate, migrate across
physical hosts, or decommission to provide rapid resource elasticity to the
users. It is evident that existing TA techniques cannot address all these
requirements. In addition, there is an increasing number of complex
multi-layer/multi-asset attacks on Cloud systems, such as the Equifax data
breach. Hence, there is a need for threat analysis approaches that are designed
to analyze threats in complex, dynamic, and multi-layer Cloud environments. In
this paper, we propose ThreatPro that addresses the analysis of multi-layer
attacks and supports dynamic interconnections in the Cloud. ThreatPro
facilitates threat analysis by developing a technology-agnostic information
flow model, which represents the Cloud's functionality through a set of
conditional transitions. The model establishes the basis to capture the
multi-layer and dynamic interconnections during the life-cycle of a Virtual
Machine (VM). Specifically, ThreatPro contributes in (a) enabling the
exploration of a threat's behavior and its propagation across the Cloud, and
(b) assessing the security of the Cloud by analyzing the impact of multiple
threats across various operational layers/assets. Using public information on
threats from the National Vulnerability Database (NVD), we validate ThreatPro's
capabilities, i.e., (a) identify and trace actual Cloud attacks and (b)
speculatively postulate alternate potential attack paths.
| [
{
"created": "Thu, 29 Sep 2022 14:00:55 GMT",
"version": "v1"
}
] | 2022-09-30 | [
[
"Manzoor",
"Salman",
""
],
[
"Gouglidis",
"Antonios",
""
],
[
"Bradbury",
"Matthew",
""
],
[
"Suri",
"Neeraj",
""
]
] | Many effective Threat Analysis (TA) techniques exist that focus on analyzing threats to targeted assets (e.g., components, services). These techniques consider static interconnections among the assets. However, in dynamic environments, such as the Cloud, resources can instantiate, migrate across physical hosts, or decommission to provide rapid resource elasticity to the users. It is evident that existing TA techniques cannot address all these requirements. In addition, there is an increasing number of complex multi-layer/multi-asset attacks on Cloud systems, such as the Equifax data breach. Hence, there is a need for threat analysis approaches that are designed to analyze threats in complex, dynamic, and multi-layer Cloud environments. In this paper, we propose ThreatPro that addresses the analysis of multi-layer attacks and supports dynamic interconnections in the Cloud. ThreatPro facilitates threat analysis by developing a technology-agnostic information flow model, which represents the Cloud's functionality through a set of conditional transitions. The model establishes the basis to capture the multi-layer and dynamic interconnections during the life-cycle of a Virtual Machine (VM). Specifically, ThreatPro contributes in (a) enabling the exploration of a threat's behavior and its propagation across the Cloud, and (b) assessing the security of the Cloud by analyzing the impact of multiple threats across various operational layers/assets. Using public information on threats from the National Vulnerability Database (NVD), we validate ThreatPro's capabilities, i.e., (a) identify and trace actual Cloud attacks and (b) speculatively postulate alternate potential attack paths. |
2206.05282 | Ian Covert | Ian Covert, Chanwoo Kim, Su-In Lee | Learning to Estimate Shapley Values with Vision Transformers | ICLR 2023 camera-ready | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformers have become a default architecture in computer vision, but
understanding what drives their predictions remains a challenging problem.
Current explanation approaches rely on attention values or input gradients, but
these provide a limited view of a model's dependencies. Shapley values offer a
theoretically sound alternative, but their computational cost makes them
impractical for large, high-dimensional models. In this work, we aim to make
Shapley values practical for vision transformers (ViTs). To do so, we first
leverage an attention masking approach to evaluate ViTs with partial
information, and we then develop a procedure to generate Shapley value
explanations via a separate, learned explainer model. Our experiments compare
Shapley values to many baseline methods (e.g., attention rollout, GradCAM,
LRP), and we find that our approach provides more accurate explanations than
existing methods for ViTs.
| [
{
"created": "Fri, 10 Jun 2022 07:09:28 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Sep 2022 08:49:55 GMT",
"version": "v2"
},
{
"created": "Wed, 1 Mar 2023 20:24:58 GMT",
"version": "v3"
}
] | 2023-03-03 | [
[
"Covert",
"Ian",
""
],
[
"Kim",
"Chanwoo",
""
],
[
"Lee",
"Su-In",
""
]
] | Transformers have become a default architecture in computer vision, but understanding what drives their predictions remains a challenging problem. Current explanation approaches rely on attention values or input gradients, but these provide a limited view of a model's dependencies. Shapley values offer a theoretically sound alternative, but their computational cost makes them impractical for large, high-dimensional models. In this work, we aim to make Shapley values practical for vision transformers (ViTs). To do so, we first leverage an attention masking approach to evaluate ViTs with partial information, and we then develop a procedure to generate Shapley value explanations via a separate, learned explainer model. Our experiments compare Shapley values to many baseline methods (e.g., attention rollout, GradCAM, LRP), and we find that our approach provides more accurate explanations than existing methods for ViTs. |
2204.06598 | Sheng He | Sheng He, Yanfang Feng, P. Ellen Grant, Yangming Ou | Deep Relation Learning for Regression and Its Application to Brain Age
Estimation | null | IEEE Transactions on Medical Imaging. 2022 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Most deep learning models for temporal regression directly output the
estimation based on single input images, ignoring the relationships between
different images. In this paper, we propose deep relation learning for
regression, aiming to learn different relations between a pair of input images.
Four non-linear relations are considered: "cumulative relation", "relative
relation", "maximal relation" and "minimal relation". These four relations are
learned simultaneously from one deep neural network which has two parts:
feature extraction and relation regression. We use an efficient convolutional
neural network to extract deep features from the pair of input images and apply
a Transformer for relation learning. The proposed method is evaluated on a
merged dataset with 6,049 subjects with ages of 0-97 years using 5-fold
cross-validation for the task of brain age estimation. The experimental results
have shown that the proposed method achieved a mean absolute error (MAE) of
2.38 years, which is lower than the MAEs of 8 other state-of-the-art algorithms
with statistical significance (p$<$0.05) in paired T-test (two-side).
| [
{
"created": "Wed, 13 Apr 2022 18:40:34 GMT",
"version": "v1"
}
] | 2022-04-15 | [
[
"He",
"Sheng",
""
],
[
"Feng",
"Yanfang",
""
],
[
"Grant",
"P. Ellen",
""
],
[
"Ou",
"Yangming",
""
]
] | Most deep learning models for temporal regression directly output the estimation based on single input images, ignoring the relationships between different images. In this paper, we propose deep relation learning for regression, aiming to learn different relations between a pair of input images. Four non-linear relations are considered: "cumulative relation", "relative relation", "maximal relation" and "minimal relation". These four relations are learned simultaneously from one deep neural network which has two parts: feature extraction and relation regression. We use an efficient convolutional neural network to extract deep features from the pair of input images and apply a Transformer for relation learning. The proposed method is evaluated on a merged dataset with 6,049 subjects with ages of 0-97 years using 5-fold cross-validation for the task of brain age estimation. The experimental results have shown that the proposed method achieved a mean absolute error (MAE) of 2.38 years, which is lower than the MAEs of 8 other state-of-the-art algorithms with statistical significance (p$<$0.05) in paired T-test (two-side). |
1408.3469 | Steven Weber | Nan Xie, John MacLaren Walsh, Steven Weber | Properties of an Aloha-like stability region | 28 pages, 9 figures. Submitted August 15, 2014, revised September 21,
2015 and August 31, 2016, and accepted November 06, 2016 for publication in
IEEE Transactions on Information Theory. Preliminary results presented at
ISIT 2010, ITA 2010, and ITA 2011. DOI: 10.1109/TIT.2016.2640302. Copyright
transferred to IEEE. This is last version uploaded by the authors prior to
IEEE proofing process | null | 10.1109/TIT.2016.2640302 | null | cs.IT cs.NI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A well-known inner bound on the stability region of the finite-user slotted
Aloha protocol is the set of all arrival rates for which there exists some
choice of the contention probabilities such that the associated worst-case
service rate for each user exceeds the user's arrival rate, denoted $\Lambda$.
Although testing membership in $\Lambda$ of a given arrival rate can be posed
as a convex program, it is nonetheless of interest to understand the properties
of this set. In this paper we develop new results of this nature, including
$i)$ an equivalence between membership in $\Lambda$ and the existence of a
positive root of a given polynomial, $ii)$ a method to construct a vector of
contention probabilities to stabilize any stabilizable arrival rate vector,
$iii)$ the volume of $\Lambda$, $iv)$ explicit polyhedral, spherical, and
ellipsoid inner and outer bounds on $\Lambda$, and $v)$ characterization of the
generalized convexity properties of a natural ``excess rate'' function
associated with $\Lambda$, including the convexity of the set of contention
probabilities that stabilize a given arrival rate vector.
| [
{
"created": "Fri, 15 Aug 2014 05:28:52 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Jan 2017 20:20:40 GMT",
"version": "v2"
}
] | 2017-01-06 | [
[
"Xie",
"Nan",
""
],
[
"Walsh",
"John MacLaren",
""
],
[
"Weber",
"Steven",
""
]
] | A well-known inner bound on the stability region of the finite-user slotted Aloha protocol is the set of all arrival rates for which there exists some choice of the contention probabilities such that the associated worst-case service rate for each user exceeds the user's arrival rate, denoted $\Lambda$. Although testing membership in $\Lambda$ of a given arrival rate can be posed as a convex program, it is nonetheless of interest to understand the properties of this set. In this paper we develop new results of this nature, including $i)$ an equivalence between membership in $\Lambda$ and the existence of a positive root of a given polynomial, $ii)$ a method to construct a vector of contention probabilities to stabilize any stabilizable arrival rate vector, $iii)$ the volume of $\Lambda$, $iv)$ explicit polyhedral, spherical, and ellipsoid inner and outer bounds on $\Lambda$, and $v)$ characterization of the generalized convexity properties of a natural ``excess rate'' function associated with $\Lambda$, including the convexity of the set of contention probabilities that stabilize a given arrival rate vector. |
1305.4367 | Raphael Jolly | Rapha\"el Jolly | Parallelizing Stream with Future | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stream is re-interpreted in terms of a Lazy monad. Future is substituted for
Lazy in the obtained construct, resulting in possible parallelization of any
algorithm expressible as a Stream computation. The principle is tested against
two example algorithms. Performance is evaluated, and a way to improve it
briefly discussed.
| [
{
"created": "Sun, 19 May 2013 15:00:14 GMT",
"version": "v1"
}
] | 2013-05-21 | [
[
"Jolly",
"Raphaël",
""
]
] | Stream is re-interpreted in terms of a Lazy monad. Future is substituted for Lazy in the obtained construct, resulting in possible parallelization of any algorithm expressible as a Stream computation. The principle is tested against two example algorithms. Performance is evaluated, and a way to improve it briefly discussed. |
1904.05005 | Xun Yang | Xun Yang, Meng Wang, Dacheng Tao | Person Re-identification with Metric Learning using Privileged
Information | Accepted for IEEE TIP | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the promising progress made in recent years, person re-identification
remains a challenging task due to complex variations in human appearances from
different camera views. This paper presents a logistic discriminant metric
learning method for this challenging problem. Different from most existing
metric learning algorithms, it exploits both original data and auxiliary data
during training, which is motivated by the new machine learning paradigm -
Learning Using Privileged Information. Such privileged information is a kind of
auxiliary knowledge which is only available during training. Our goal is to
learn an optimal distance function by constructing a locally adaptive decision
rule with the help of privileged information. We jointly learn two distance
metrics by minimizing the empirical loss penalizing the difference between the
distance in the original space and that in the privileged space. In our
setting, the distance in the privileged space functions as a local decision
threshold, which guides the decision making in the original space like a
teacher. The metric learned from the original space is used to compute the
distance between a probe image and a gallery image during testing. In addition,
we extend the proposed approach to a multi-view setting which is able to
explore the complementation of multiple feature representations. In the
multi-view setting, multiple metrics corresponding to different original
features are jointly learned, guided by the same privileged information.
Besides, an effective iterative optimization scheme is introduced to
simultaneously optimize the metrics and the assigned metric weights. Experimental
results on several widely-used datasets demonstrate that the proposed approach
is superior to global decision threshold based methods and outperforms most
state-of-the-art results.
| [
{
"created": "Wed, 10 Apr 2019 05:01:28 GMT",
"version": "v1"
}
] | 2019-04-11 | [
[
"Yang",
"Xun",
""
],
[
"Wang",
"Meng",
""
],
[
"Tao",
"Dacheng",
""
]
] | Despite the promising progress made in recent years, person re-identification remains a challenging task due to complex variations in human appearances from different camera views. This paper presents a logistic discriminant metric learning method for this challenging problem. Different from most existing metric learning algorithms, it exploits both original data and auxiliary data during training, which is motivated by the new machine learning paradigm - Learning Using Privileged Information. Such privileged information is a kind of auxiliary knowledge which is only available during training. Our goal is to learn an optimal distance function by constructing a locally adaptive decision rule with the help of privileged information. We jointly learn two distance metrics by minimizing the empirical loss penalizing the difference between the distance in the original space and that in the privileged space. In our setting, the distance in the privileged space functions as a local decision threshold, which guides the decision making in the original space like a teacher. The metric learned from the original space is used to compute the distance between a probe image and a gallery image during testing. In addition, we extend the proposed approach to a multi-view setting which is able to explore the complementation of multiple feature representations. In the multi-view setting, multiple metrics corresponding to different original features are jointly learned, guided by the same privileged information. Besides, an effective iterative optimization scheme is introduced to simultaneously optimize the metrics and the assigned metric weights. Experimental results on several widely-used datasets demonstrate that the proposed approach is superior to global decision threshold based methods and outperforms most state-of-the-art results. |
1903.09203 | Seyyed Ali Hashemi | Seyyed Ali Hashemi, Carlo Condo, Marco Mondelli, Warren J. Gross | Rate-Flexible Fast Polar Decoders | null | null | 10.1109/TSP.2019.2944738 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Polar codes have gained extensive attention during the past few years and
recently they have been selected for the next generation of wireless
communications standards (5G). Successive-cancellation-based (SC-based)
decoders, such as SC list (SCL) and SC flip (SCF), provide a reasonable error
performance for polar codes at the cost of low decoding speed. Fast SC-based
decoders, such as Fast-SSC, Fast-SSCL, and Fast-SSCF, identify the special
constituent codes in a polar code graph off-line, produce a list of operations,
store the list in memory, and feed the list to the decoder to decode the
constituent codes in order efficiently, thus increasing the decoding speed.
However, the list of operations is dependent on the code rate and as the rate
changes, a new list is produced, making fast SC-based decoders not
rate-flexible. In this paper, we propose a completely rate-flexible fast
SC-based decoder by creating the list of operations directly in hardware, with
low implementation complexity. We further propose a hardware architecture
implementing the proposed method and show that the area occupation of the
rate-flexible fast SC-based decoder in this paper is only $38\%$ of the total
area of the memory-based base-line decoder when 5G code rates are supported.
| [
{
"created": "Thu, 21 Mar 2019 19:06:51 GMT",
"version": "v1"
}
] | 2020-01-08 | [
[
"Hashemi",
"Seyyed Ali",
""
],
[
"Condo",
"Carlo",
""
],
[
"Mondelli",
"Marco",
""
],
[
"Gross",
"Warren J.",
""
]
] | Polar codes have gained extensive attention during the past few years and recently they have been selected for the next generation of wireless communications standards (5G). Successive-cancellation-based (SC-based) decoders, such as SC list (SCL) and SC flip (SCF), provide a reasonable error performance for polar codes at the cost of low decoding speed. Fast SC-based decoders, such as Fast-SSC, Fast-SSCL, and Fast-SSCF, identify the special constituent codes in a polar code graph off-line, produce a list of operations, store the list in memory, and feed the list to the decoder to decode the constituent codes in order efficiently, thus increasing the decoding speed. However, the list of operations is dependent on the code rate and as the rate changes, a new list is produced, making fast SC-based decoders not rate-flexible. In this paper, we propose a completely rate-flexible fast SC-based decoder by creating the list of operations directly in hardware, with low implementation complexity. We further propose a hardware architecture implementing the proposed method and show that the area occupation of the rate-flexible fast SC-based decoder in this paper is only $38\%$ of the total area of the memory-based base-line decoder when 5G code rates are supported. |
1910.12783 | Lingzhou Hong | Lingzhou Hong, Alfredo Garcia, and Ceyhun Eksin | Distributed Networked Learning with Correlated Data | 36 pages | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We consider a distributed estimation method in a setting with heterogeneous
streams of correlated data distributed across nodes in a network. In the
considered approach, linear models are estimated locally (i.e., with only local
data) subject to a network regularization term that penalizes a local model
that differs from neighboring models. We analyze computation dynamics
(associated with stochastic gradient updates) and information exchange
(associated with exchanging current models with neighboring nodes). We provide
a finite-time characterization of convergence of the weighted ensemble average
estimate and compare this result to federated learning, an alternative approach
to estimation wherein a single model is updated by locally generated gradient
updates. This comparison highlights the trade-off between speed vs precision:
while model updates take place at a faster rate in federated learning, the
proposed networked approach to estimation enables the identification of models
with higher precision. We illustrate the method's general applicability in two
examples: estimating a Markov random field using wireless sensor networks and
modeling prey escape behavior of flocking birds based on a publicly available
dataset.
| [
{
"created": "Mon, 28 Oct 2019 16:14:02 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 23:38:45 GMT",
"version": "v2"
}
] | 2021-02-11 | [
[
"Hong",
"Lingzhou",
""
],
[
"Garcia",
"Alfredo",
""
],
[
"Eksin",
"Ceyhun",
""
]
] | We consider a distributed estimation method in a setting with heterogeneous streams of correlated data distributed across nodes in a network. In the considered approach, linear models are estimated locally (i.e., with only local data) subject to a network regularization term that penalizes a local model that differs from neighboring models. We analyze computation dynamics (associated with stochastic gradient updates) and information exchange (associated with exchanging current models with neighboring nodes). We provide a finite-time characterization of convergence of the weighted ensemble average estimate and compare this result to federated learning, an alternative approach to estimation wherein a single model is updated by locally generated gradient updates. This comparison highlights the trade-off between speed vs precision: while model updates take place at a faster rate in federated learning, the proposed networked approach to estimation enables the identification of models with higher precision. We illustrate the method's general applicability in two examples: estimating a Markov random field using wireless sensor networks and modeling prey escape behavior of flocking birds based on a publicly available dataset. |
1711.01214 | Olivier Auber | Olivier Auber | Refounding legitimacy towards Aethogenesis | Proceedings of 18th International Research Conference in The
Planetary Collegium's Series 'Art & consciousness in the post-biological era'
Shanghai 2015. 9 pages. 4 figures | Technoetic Arts Volume 14 Number 3 December 2016 pp. 235-249(15) | 10.1386/tear.14.3.235_1 | null | cs.CY | http://creativecommons.org/licenses/by-sa/4.0/ | The fusion of humans and technology takes us into an unknown world described
by some authors as populated by quasi living species that would relegate us -
ordinary humans - to the rank of alienated agents emptied of our identity and
consciousness. I argue instead that our world is woven of simple though
invisible perspectives which - if we become aware of them - may renew our
ability for making judgments and enhance our autonomy. I became aware of these
invisible perspectives by observing and practicing a real time collective net
art experiment called the Poietic Generator. As the perspectives unveiled by
this experiment are invisible I have called them anoptical perspectives i.e.
non-optical by analogy with the optical perspective of the Renaissance. Later I
have come to realize that these perspectives obtain their cognitive structure
from the political origins of our language. Accordingly it is possible to
define certain cognitive criteria for assessing the legitimacy of the anoptical
perspectives just like some artists and architects of the Renaissance defined
the geometrical criteria that established the legitimacy of the optical one.
| [
{
"created": "Fri, 3 Nov 2017 15:49:03 GMT",
"version": "v1"
}
] | 2017-11-06 | [
[
"Auber",
"Olivier",
""
]
] | The fusion of humans and technology takes us into an unknown world described by some authors as populated by quasi living species that would relegate us - ordinary humans - to the rank of alienated agents emptied of our identity and consciousness. I argue instead that our world is woven of simple though invisible perspectives which - if we become aware of them - may renew our ability for making judgments and enhance our autonomy. I became aware of these invisible perspectives by observing and practicing a real time collective net art experiment called the Poietic Generator. As the perspectives unveiled by this experiment are invisible I have called them anoptical perspectives i.e. non-optical by analogy with the optical perspective of the Renaissance. Later I have come to realize that these perspectives obtain their cognitive structure from the political origins of our language. Accordingly it is possible to define certain cognitive criteria for assessing the legitimacy of the anoptical perspectives just like some artists and architects of the Renaissance defined the geometrical criteria that established the legitimacy of the optical one. |
2404.16380 | Zuocheng Wen | Zuocheng Wen and Lingzhong Guo | Efficient Higher-order Convolution for Small Kernels in Deep Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks (DCNNs) are a class of artificial neural
networks, primarily for computer vision tasks such as segmentation and
classification. Many nonlinear operations, such as activation functions and
pooling strategies, are used in DCNNs to enhance their ability to process
different signals with different tasks. Conventional convolution, a linear
filter, is the essential component of DCNNs, while nonlinear convolution is
generally implemented as higher-order Volterra filters. However, for Volterra
filtering, significant memory and computational costs pose a primary limitation
to its widespread application in DCNNs. In this study, we propose
a novel method to perform higher-order Volterra filtering with lower memory and
computation cost in the forward and backward passes of DCNN training. The proposed
method demonstrates computational advantages compared with conventional
Volterra filter implementation. Furthermore, based on the proposed method, a
new attention module called Higher-order Local Attention Block (HLA) is
proposed and tested on CIFAR-100 dataset, which shows competitive improvement
for classification task. Source code is available at:
https://github.com/WinterWen666/Efficient-High-Order-Volterra-Convolution.git
| [
{
"created": "Thu, 25 Apr 2024 07:42:48 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Wen",
"Zuocheng",
""
],
[
"Guo",
"Lingzhong",
""
]
] | Deep convolutional neural networks (DCNNs) are a class of artificial neural networks, primarily for computer vision tasks such as segmentation and classification. Many nonlinear operations, such as activation functions and pooling strategies, are used in DCNNs to enhance their ability to process different signals with different tasks. Conventional convolution, a linear filter, is the essential component of DCNNs, while nonlinear convolution is generally implemented as higher-order Volterra filters. However, for Volterra filtering, significant memory and computational costs pose a primary limitation to its widespread application in DCNNs. In this study, we propose a novel method to perform higher-order Volterra filtering with lower memory and computation cost in the forward and backward passes of DCNN training. The proposed method demonstrates computational advantages compared with conventional Volterra filter implementation. Furthermore, based on the proposed method, a new attention module called Higher-order Local Attention Block (HLA) is proposed and tested on the CIFAR-100 dataset, which shows competitive improvement for the classification task. Source code is available at: https://github.com/WinterWen666/Efficient-High-Order-Volterra-Convolution.git |
2009.11840 | Martin Kouteck\'y | Martin Kouteck\'y and Johannes Zink | Complexity of Scheduling Few Types of Jobs on Related and Unrelated
Machines | null | null | null | null | cs.DS cs.CC math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of scheduling jobs to machines while minimizing the total makespan,
the sum of weighted completion times, or a norm of the load vector, are among
the oldest and most fundamental tasks in combinatorial optimization. Since all
of these problems are in general NP-hard, much attention has been given to the
regime where there is only a small number $k$ of job types, but possibly the
number of jobs $n$ is large; this is the few job types, high-multiplicity
regime. Despite many positive results, the hardness boundary of this regime was
not understood until now.
We show that makespan minimization on uniformly related machines
($Q|HM|C_{\max}$) is NP-hard already with $6$ job types, and that the related
Cutting Stock problem is NP-hard already with $8$ item types. For the more
general unrelated machines model ($R|HM|C_{\max}$), we show that if either the
largest job size $p_{\max}$, or the number of jobs $n$ are polynomially bounded
in the instance size $|I|$, there are algorithms with complexity
$|I|^{\textrm{poly}(k)}$. Our main result is that this is unlikely to be
improved, because $Q||C_{\max}$ is W[1]-hard parameterized by $k$ already when
$n$, $p_{\max}$, and the numbers describing the speeds are polynomial in $|I|$;
the same holds for $R|HM|C_{\max}$ (without speeds) when the job sizes matrix
has rank $2$. Our positive and negative results also extend to the objectives
$\ell_2$-norm minimization of the load vector and, partially, sum of weighted
completion times $\sum w_j C_j$.
Along the way, we answer affirmatively the question whether makespan
minimization on identical machines ($P||C_{\max}$) is fixed-parameter tractable
parameterized by $k$, extending our understanding of this fundamental problem.
Together with our hardness results for $Q||C_{\max}$ this implies that the
complexity of $P|HM|C_{\max}$ is the only remaining open case.
| [
{
"created": "Thu, 24 Sep 2020 17:38:31 GMT",
"version": "v1"
}
] | 2020-09-25 | [
[
"Koutecký",
"Martin",
""
],
[
"Zink",
"Johannes",
""
]
] | The task of scheduling jobs to machines while minimizing the total makespan, the sum of weighted completion times, or a norm of the load vector, are among the oldest and most fundamental tasks in combinatorial optimization. Since all of these problems are in general NP-hard, much attention has been given to the regime where there is only a small number $k$ of job types, but possibly the number of jobs $n$ is large; this is the few job types, high-multiplicity regime. Despite many positive results, the hardness boundary of this regime was not understood until now. We show that makespan minimization on uniformly related machines ($Q|HM|C_{\max}$) is NP-hard already with $6$ job types, and that the related Cutting Stock problem is NP-hard already with $8$ item types. For the more general unrelated machines model ($R|HM|C_{\max}$), we show that if either the largest job size $p_{\max}$, or the number of jobs $n$ are polynomially bounded in the instance size $|I|$, there are algorithms with complexity $|I|^{\textrm{poly}(k)}$. Our main result is that this is unlikely to be improved, because $Q||C_{\max}$ is W[1]-hard parameterized by $k$ already when $n$, $p_{\max}$, and the numbers describing the speeds are polynomial in $|I|$; the same holds for $R|HM|C_{\max}$ (without speeds) when the job sizes matrix has rank $2$. Our positive and negative results also extend to the objectives $\ell_2$-norm minimization of the load vector and, partially, sum of weighted completion times $\sum w_j C_j$. Along the way, we answer affirmatively the question whether makespan minimization on identical machines ($P||C_{\max}$) is fixed-parameter tractable parameterized by $k$, extending our understanding of this fundamental problem. Together with our hardness results for $Q||C_{\max}$ this implies that the complexity of $P|HM|C_{\max}$ is the only remaining open case. |
2208.04713 | Sreekrishnan Venkateswaran | Sreekrishnan Venkateswaran | Reflections on the Evolution of Computer Science Education | Preprint Edition of the paper published in ACM SIGSOFT Software
Engineering Notes (SEN), Volume 47, Issue 3, July 2022
(https://doi.org/10.1145/3539814.3539817) | ACM SIGSOFT Software Engineering Notes (SEN), Volume 47, Issue 3,
July 2022 | 10.1145/3539814.3539817 | Volume 47, Issue 3, July 2022 | cs.CY cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer Science education has been evolving over the years to reflect
applied realities. Until about a decade ago, theory of computation, algorithm
design and system software dominated the curricula. Most courses were
considered core and were hence mandatory; the programme structure did not allow
much of a choice or variety. This column analyses why this changed circa 2010,
when elective subjects across scores of topics became part of mainstream
education to reflect the ongoing lateral acceleration of Computer Science.
Fundamental discoveries in artificial intelligence, machine learning,
virtualization and cloud computing are several decades old. Many core theories
in data science are centuries old. Yet their leverage exploded only after
circa 2010, when the stage was set for people-centric problem solving at
massive scale. This was due in part to the rush of innovative real-world applications
that reached the common man through the ubiquitous smart phone. AI/ML modules
arrived in popular programming languages; they could be used to build and train
models on powerful - yet affordable - compute on public clouds reachable
through high-speed Internet connectivity. Academia responded by adapting
Computer Science curricula to align it with the changing technology landscape.
The goal of this experiential piece is to trigger a lively discussion on the
past and future of Computer Science education.
| [
{
"created": "Sat, 9 Jul 2022 07:07:12 GMT",
"version": "v1"
}
] | 2022-08-10 | [
[
"Venkateswaran",
"Sreekrishnan",
""
]
] | Computer Science education has been evolving over the years to reflect applied realities. Until about a decade ago, theory of computation, algorithm design and system software dominated the curricula. Most courses were considered core and were hence mandatory; the programme structure did not allow much of a choice or variety. This column analyses why this changed Circa 2010 when elective subjects across scores of topics become part of mainstream education to reflect the on-going lateral acceleration of Computer Science. Fundamental discoveries in artificial intelligence, machine learning, virtualization and cloud computing are several decades old. Many core theories in data science are centuries old. Yet their leverage exploded only after Circa 2010, when the stage got set for people-centric problem solving in massive scale. This was due in part to the rush of innovative real-world applications that reached the common man through the ubiquitous smart phone. AI/ML modules arrived in popular programming languages; they could be used to build and train models on powerful - yet affordable - compute on public clouds reachable through high-speed Internet connectivity. Academia responded by adapting Computer Science curricula to align it with the changing technology landscape. The goal of this experiential piece is to trigger a lively discussion on the past and future of Computer Science education. |
2305.01864 | Cem Subakan | Zhepei Wang, Cem Subakan, Krishna Subramani, Junkai Wu, Tiago Tavares,
Fabio Ayres, Paris Smaragdis | Unsupervised Improvement of Audio-Text Cross-Modal Representations | Accepted to WASPAA 2023 | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in using language models to obtain cross-modal audio-text
representations have overcome the limitations of conventional training
approaches that use predefined labels. This has allowed the community to make
progress in tasks like zero-shot classification, which would otherwise not be
possible. However, learning such representations requires a large amount of
human-annotated audio-text pairs. In this paper, we study unsupervised
approaches to improve the learning framework of such representations with
unpaired text and audio. We explore domain-unspecific and domain-specific
curation methods to create audio-text pairs that we use to further improve the
model. We also show that when domain-specific curation is used in conjunction
with a soft-labeled contrastive loss, we are able to obtain significant
improvement in terms of zero-shot classification performance on downstream
sound event classification or acoustic scene classification tasks.
| [
{
"created": "Wed, 3 May 2023 02:30:46 GMT",
"version": "v1"
},
{
"created": "Fri, 5 May 2023 02:22:49 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Jul 2023 18:28:36 GMT",
"version": "v3"
}
] | 2023-08-02 | [
[
"Wang",
"Zhepei",
""
],
[
"Subakan",
"Cem",
""
],
[
"Subramani",
"Krishna",
""
],
[
"Wu",
"Junkai",
""
],
[
"Tavares",
"Tiago",
""
],
[
"Ayres",
"Fabio",
""
],
[
"Smaragdis",
"Paris",
""
]
] | Recent advances in using language models to obtain cross-modal audio-text representations have overcome the limitations of conventional training approaches that use predefined labels. This has allowed the community to make progress in tasks like zero-shot classification, which would otherwise not be possible. However, learning such representations requires a large amount of human-annotated audio-text pairs. In this paper, we study unsupervised approaches to improve the learning framework of such representations with unpaired text and audio. We explore domain-unspecific and domain-specific curation methods to create audio-text pairs that we use to further improve the model. We also show that when domain-specific curation is used in conjunction with a soft-labeled contrastive loss, we are able to obtain significant improvement in terms of zero-shot classification performance on downstream sound event classification or acoustic scene classification tasks. |
1309.4508 | Abdul Razaque | Nyembo Salama, Christian Bach | Introduction of 6th Generation Smart Phone combining the features of
both Apple and Android smart phone | 10 pages, 1 figure | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present our novel contribution methodology based on the
results of a case study that has been implemented in our research environment to
test, with a new technique, the usability of both intelligent Android and Apple
phones. This analysis of the case study stands for features similar to
applications, operating system, hardware and software structure, battery life,
and online-based websites. Multiple questions were used to collect user
answers on ongoing features. Users react directly by responding based on their
daily used product experience. Consequently, the estimation is based on the
data that has been unregistered from the user. The most recent results will end
up by introducing a combination of ideal features on both products to build a
wonderful extended product in the future.
| [
{
"created": "Wed, 18 Sep 2013 00:22:34 GMT",
"version": "v1"
}
] | 2013-09-19 | [
[
"Salama",
"Nyembo",
""
],
[
"Bach",
"Christian",
""
]
] | In this paper, we present our novel contribution methodology based on the results of case study that has been implemented in our research environment to test with new technique the usability of both intelligent Android and Apple phones. This analysis of the case study stands for features similar to applications, operating system, hardware and software structure, battery life, and online based websites. Multiple interrogations were applied to collect user answers ongoing features. Users directly react by responding based on their daily used product experience. Consequently, the estimation is based on the data that has been unregistered from the user. The most recent results will end up by introducing a combination of ideal features on both products to build a wonderful extended product in the future. |
2406.17106 | David Mezey | David Mezey, Renaud Bastien, Yating Zheng, Neal McKee, David Stoll,
Heiko Hamann, Pawel Romanczuk | Purely vision-based collective movement of robots | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Collective movement inspired by animal groups promises inherited benefits for
robot swarms, such as enhanced sensing and efficiency. However, while animals
move in groups using only their local senses, robots often obey central control
or use direct communication, introducing systemic weaknesses to the swarm. In
the hope of addressing such vulnerabilities, developing bio-inspired
decentralized swarms has been a major focus in recent decades. Yet, creating
robots that move efficiently together using only local sensory information
remains an extraordinary challenge. In this work, we present a decentralized,
purely vision-based swarm of terrestrial robots. Within this novel framework,
robots achieve collisionless, polarized motion exclusively through minimal
visual interactions, computing everything on board based on their individual
camera streams, making central processing or direct communication obsolete.
With agent-based simulations, we further show that using this model, even with
a strictly limited field of view and within confined spaces, ordered group
motion can emerge, while also highlighting key limitations. Our results offer a
multitude of practical applications from hybrid societies coordinating
collective movement without any common communication protocol, to advanced,
decentralized vision-based robot swarms capable of diverse tasks in
ever-changing environments.
| [
{
"created": "Mon, 24 Jun 2024 19:47:13 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Mezey",
"David",
""
],
[
"Bastien",
"Renaud",
""
],
[
"Zheng",
"Yating",
""
],
[
"McKee",
"Neal",
""
],
[
"Stoll",
"David",
""
],
[
"Hamann",
"Heiko",
""
],
[
"Romanczuk",
"Pawel",
""
]
] | Collective movement inspired by animal groups promises inherited benefits for robot swarms, such as enhanced sensing and efficiency. However, while animals move in groups using only their local senses, robots often obey central control or use direct communication, introducing systemic weaknesses to the swarm. In the hope of addressing such vulnerabilities, developing bio-inspired decentralized swarms has been a major focus in recent decades. Yet, creating robots that move efficiently together using only local sensory information remains an extraordinary challenge. In this work, we present a decentralized, purely vision-based swarm of terrestrial robots. Within this novel framework robots achieve collisionless, polarized motion exclusively through minimal visual interactions, computing everything on board based on their individual camera streams, making central processing or direct communication obsolete. With agent-based simulations, we further show that using this model, even with a strictly limited field of view and within confined spaces, ordered group motion can emerge, while also highlighting key limitations. Our results offer a multitude of practical applications from hybrid societies coordinating collective movement without any common communication protocol, to advanced, decentralized vision-based robot swarms capable of diverse tasks in ever-changing environments. |
2205.15508 | Jianheng Tang | Jianheng Tang, Jiajin Li, Ziqi Gao, Jia Li | Rethinking Graph Neural Networks for Anomaly Detection | Accepted by ICML 2022. Our code and data are released at
https://github.com/squareRoot3/Rethinking-Anomaly-Detection | null | null | null | cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) are widely applied for graph anomaly detection.
Since one of the key components of GNN design is selecting a tailored spectral
filter, we take the first step towards analyzing anomalies through the lens of
the graph spectrum. Our crucial observation is that the existence of anomalies
leads to the `right-shift' phenomenon, that is, the spectral energy distribution
concentrates less on low frequencies and more on high frequencies. This fact
motivates us to propose the Beta Wavelet Graph Neural Network (BWGNN). Indeed,
BWGNN has spectral and spatial localized band-pass filters to better handle the
`right-shift' phenomenon in anomalies. We demonstrate the effectiveness of
BWGNN on four large-scale anomaly detection datasets. Our code and data are
released at https://github.com/squareRoot3/Rethinking-Anomaly-Detection
| [
{
"created": "Tue, 31 May 2022 02:39:05 GMT",
"version": "v1"
}
] | 2022-06-01 | [
[
"Tang",
"Jianheng",
""
],
[
"Li",
"Jiajin",
""
],
[
"Gao",
"Ziqi",
""
],
[
"Li",
"Jia",
""
]
] | Graph Neural Networks (GNNs) are widely applied for graph anomaly detection. As one of the key components for GNN design is to select a tailored spectral filter, we take the first step towards analyzing anomalies via the lens of the graph spectrum. Our crucial observation is the existence of anomalies will lead to the `right-shift' phenomenon, that is, the spectral energy distribution concentrates less on low frequencies and more on high frequencies. This fact motivates us to propose the Beta Wavelet Graph Neural Network (BWGNN). Indeed, BWGNN has spectral and spatial localized band-pass filters to better handle the `right-shift' phenomenon in anomalies. We demonstrate the effectiveness of BWGNN on four large-scale anomaly detection datasets. Our code and data are released at https://github.com/squareRoot3/Rethinking-Anomaly-Detection |
1701.03753 | Lifeng Wang | Anqi He, Lifeng Wang, Yue Chen, Kai-Kit Wong, and Maged Elkashlan | Spectral and Energy Efficiency of Uplink D2D Underlaid Massive MIMO
Cellular Networks | Accepted by IEEE Transactions on Communications | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the key 5G scenarios is that device-to-device (D2D) and massive
multiple-input multiple-output (MIMO) will coexist. However, interference
in the uplink D2D underlaid massive MIMO cellular networks needs to be
coordinated, due to the vast cellular and D2D transmissions. To this end, this
paper introduces a spatially dynamic power control solution for mitigating the
cellular-to-D2D and D2D-to-cellular interference. In particular, the proposed
D2D power control policy is rather flexible including the special cases of no
D2D links or using maximum transmit power. Under the considered power control,
an analytical approach is developed to evaluate the spectral efficiency (SE)
and energy efficiency (EE) in such networks. Thus, the exact expressions of SE
for a cellular user or D2D transmitter are derived, which quantify the impacts
of key system parameters such as massive MIMO antennas and D2D density.
Moreover, the D2D scale properties are obtained, which provide the sufficient
conditions for achieving the anticipated SE. Numerical results corroborate our
analysis and show that the proposed power control solution can efficiently
mitigate interference between the cellular and D2D tier. The results
demonstrate that there exists an optimal D2D density that maximizes the area
SE of the D2D tier. In addition, the achievable EE of a cellular user can be
comparable to that of a D2D user.
| [
{
"created": "Fri, 13 Jan 2017 17:48:30 GMT",
"version": "v1"
},
{
"created": "Wed, 17 May 2017 15:36:27 GMT",
"version": "v2"
},
{
"created": "Mon, 29 May 2017 16:20:58 GMT",
"version": "v3"
}
] | 2017-05-30 | [
[
"He",
"Anqi",
""
],
[
"Wang",
"Lifeng",
""
],
[
"Chen",
"Yue",
""
],
[
"Wong",
"Kai-Kit",
""
],
[
"Elkashlan",
"Maged",
""
]
] | One of key 5G scenarios is that device-to-device (D2D) and massive multiple-input multiple-output (MIMO) will be co-existed. However, interference in the uplink D2D underlaid massive MIMO cellular networks needs to be coordinated, due to the vast cellular and D2D transmissions. To this end, this paper introduces a spatially dynamic power control solution for mitigating the cellular-to-D2D and D2D-to-cellular interference. In particular, the proposed D2D power control policy is rather flexible including the special cases of no D2D links or using maximum transmit power. Under the considered power control, an analytical approach is developed to evaluate the spectral efficiency (SE) and energy efficiency (EE) in such networks. Thus, the exact expressions of SE for a cellular user or D2D transmitter are derived, which quantify the impacts of key system parameters such as massive MIMO antennas and D2D density. Moreover, the D2D scale properties are obtained, which provide the sufficient conditions for achieving the anticipated SE. Numerical results corroborate our analysis and show that the proposed power control solution can efficiently mitigate interference between the cellular and D2D tier. The results demonstrate that there exists the optimal D2D density for maximizing the area SE of D2D tier. In addition, the achievable EE of a cellular user can be comparable to that of a D2D user. |
2106.06614 | Harald Woracek | Ana Sokolova, Harald Woracek | Nawrotzki's Algorithm for the Countable Splitting Lemma, Constructively | null | null | null | null | cs.LO math.PR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We reprove the countable splitting lemma by adapting Nawrotzki's algorithm
which produces a sequence that converges to a solution. Our algorithm combines
Nawrotzki's approach with taking finite cuts. It is constructive in the sense
that each term of the iteratively built approximating sequence as well as the
error between the approximants and the solution is computable with finitely
many algebraic operations.
| [
{
"created": "Fri, 11 Jun 2021 21:18:44 GMT",
"version": "v1"
}
] | 2021-06-15 | [
[
"Sokolova",
"Ana",
""
],
[
"Woracek",
"Harald",
""
]
] | We reprove the countable splitting lemma by adapting Nawrotzki's algorithm which produces a sequence that converges to a solution. Our algorithm combines Nawrotzki's approach with taking finite cuts. It is constructive in the sense that each term of the iteratively built approximating sequence as well as the error between the approximants and the solution is computable with finitely many algebraic operations. |
2407.09015 | Van-Giang Trinh | Van-Giang Trinh, Belaid Benhamou | Static Analysis of Logic Programs via Boolean Networks | null | null | null | null | cs.LO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Answer Set Programming (ASP) is a declarative problem solving paradigm that
can be used to encode a combinatorial problem as a logic program whose stable
models correspond to the solutions of the considered problem. ASP has been
widely applied to various domains in AI and beyond. The question "What can be
said about stable models of a logic program from its static information?" has
been investigated and proved useful in many circumstances. In this work, we
dive into this direction more deeply by making the connection between a logic
program and a Boolean network, which is a prominent modeling framework with
applications to various areas. The proposed connection can bring the existing
results in the rich history on static analysis of Boolean networks to explore
and prove more theoretical results on ASP, making it a unified and
powerful tool to further study the static analysis of ASP. In particular, the
newly obtained insights have the potential to benefit many problems in the
field of ASP.
| [
{
"created": "Fri, 12 Jul 2024 06:07:05 GMT",
"version": "v1"
}
] | 2024-07-15 | [
[
"Trinh",
"Van-Giang",
""
],
[
"Benhamou",
"Belaid",
""
]
] | Answer Set Programming (ASP) is a declarative problem solving paradigm that can be used to encode a combinatorial problem as a logic program whose stable models correspond to the solutions of the considered problem. ASP has been widely applied to various domains in AI and beyond. The question "What can be said about stable models of a logic program from its static information?" has been investigated and proved useful in many circumstances. In this work, we dive into this direction more deeply by making the connection between a logic program and a Boolean network, which is a prominent modeling framework with applications to various areas. The proposed connection can bring the existing results in the rich history on static analysis of Boolean networks to explore and prove more theoretical results on ASP, making it become a unified and powerful tool to further study the static analysis of ASP. In particular, the newly obtained insights have the potential to benefit many problems in the field of ASP. |
0808.1641 | Sudhakar Sahoo | Sudhakar Sahoo, Pabitra Pal Choudhury, Mithun Chakraborty | Characterization Of any Non-linear Boolean function Using A Set of
Linear Operators | 12 pages, 4 figures, 2 tables. Submitted for possible publication in
the International Journal of Computer Mathematics and Applications, July 2008 | null | null | null | cs.CC nlin.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The global dynamics of a non-linear Cellular Automaton are, in general, irregular,
asymmetric and unpredictable, as opposed to those of a linear CA, which is highly
systematic and tractable. In the past, efforts have been made to systematize
non-linear CA evolutions in the light of Boolean derivatives and Jacobian
matrices. In this paper, two different efforts have been made: first, we try to
systematize non-linear CA evolution in the light of deviant and
non-deviant states. For all the non-deviant states, the nearest linear rule
matrix is applicable, whereas for the deviant states we have a set of other
matrices. Second, using algebraic manipulation, an efficient algorithm is
proposed by which every non-linear Boolean function can be characterized by a
sequence of binary matrices.
| [
{
"created": "Tue, 12 Aug 2008 11:04:47 GMT",
"version": "v1"
}
] | 2008-08-13 | [
[
"Sahoo",
"Sudhakar",
""
],
[
"Choudhury",
"Pabitra Pal",
""
],
[
"Chakraborty",
"Mithun",
""
]
] | Global dynamics of a non-linear Cellular Automata is, in general irregular, asymmetric and unpredictable as opposed to that of a linear CA, which is highly systematic and tractable. In the past efforts have been made to systematize non-linear CA evolutions in the light of Boolean derivatives and Jacobian Matrices. In this paper two different efforts have been made: first we try to systematize non-linear CA evolution in the light of deviant states and non-deviant states. For all the non-deviant states the nearest linear rule matrix is applicable where as for the deviant states we have a set of other matrices. Second using algebraic manipulation, an efficient algorithm is proposed by which every Non-linear Boolean function can be characterized by a sequence of binary matrices. |
1407.1667 | Sumit Nain | Sumit Nain (Rice University), Yoad Lustig (Rice University), Moshe Y
Vardi (Rice University) | Synthesis from Probabilistic Components | null | Logical Methods in Computer Science, Volume 10, Issue 2 (June 30,
2014) lmcs:1181 | 10.2168/LMCS-10(2:17)2014 | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthesis is the automatic construction of a system from its specification.
In classical synthesis algorithms, it is always assumed that the system is
"constructed from scratch" rather than composed from reusable components. This,
of course, rarely happens in real life, where almost every non-trivial
commercial software system relies heavily on using libraries of reusable
components. Furthermore, other contexts, such as web-service orchestration, can
be modeled as synthesis of a system from a library of components. Recently,
Lustig and Vardi introduced dataflow and control-flow synthesis from libraries
of reusable components. They proved that dataflow synthesis is undecidable,
while control-flow synthesis is decidable. In this work, we consider the
problem of control-flow synthesis from libraries of probabilistic components.
We show that this more general problem is also decidable.
| [
{
"created": "Mon, 7 Jul 2014 11:10:16 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Jul 2014 12:52:57 GMT",
"version": "v2"
}
] | 2015-07-01 | [
[
"Nain",
"Sumit",
"",
"Rice University"
],
[
"Lustig",
"Yoad",
"",
"Rice University"
],
[
"Vardi",
"Moshe Y",
"",
"Rice University"
]
] | Synthesis is the automatic construction of a system from its specification. In classical synthesis algorithms, it is always assumed that the system is "constructed from scratch" rather than composed from reusable components. This, of course, rarely happens in real life, where almost every non-trivial commercial software system relies heavily on using libraries of reusable components. Furthermore, other contexts, such as web-service orchestration, can be modeled as synthesis of a system from a library of components. Recently, Lustig and Vardi introduced dataflow and control-flow synthesis from libraries of reusable components. They proved that dataflow synthesis is undecidable, while control-flow synthesis is decidable. In this work, we consider the problem of control-flow synthesis from libraries of probabilistic components . We show that this more general problem is also decidable. |
2203.10290 | Shi Hu | Shi Hu, Eric Nalisnick, Max Welling | Adversarial Defense via Image Denoising with Chaotic Encryption | null | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the literature on adversarial examples, white box and black box attacks
have received the most attention. The adversary is assumed to have either full
(white) or no (black) access to the defender's model. In this work, we focus on
the equally practical gray box setting, assuming an attacker has partial
information. We propose a novel defense that assumes everything but a private
key will be made available to the attacker. Our framework uses an image
denoising procedure coupled with encryption via a discretized Baker map.
Extensive testing against adversarial images (e.g. FGSM, PGD) crafted using
various gradients shows that our defense achieves significantly better results
on CIFAR-10 and CIFAR-100 than the state-of-the-art gray box defenses in both
natural and adversarial accuracy.
| [
{
"created": "Sat, 19 Mar 2022 10:25:02 GMT",
"version": "v1"
}
] | 2022-03-22 | [
[
"Hu",
"Shi",
""
],
[
"Nalisnick",
"Eric",
""
],
[
"Welling",
"Max",
""
]
] | In the literature on adversarial examples, white box and black box attacks have received the most attention. The adversary is assumed to have either full (white) or no (black) access to the defender's model. In this work, we focus on the equally practical gray box setting, assuming an attacker has partial information. We propose a novel defense that assumes everything but a private key will be made available to the attacker. Our framework uses an image denoising procedure coupled with encryption via a discretized Baker map. Extensive testing against adversarial images (e.g. FGSM, PGD) crafted using various gradients shows that our defense achieves significantly better results on CIFAR-10 and CIFAR-100 than the state-of-the-art gray box defenses in both natural and adversarial accuracy. |
1710.00811 | Aaron Tuor | Aaron Tuor, Samuel Kaplan, Brian Hutchinson, Nicole Nichols, Sean
Robinson | Deep Learning for Unsupervised Insider Threat Detection in Structured
Cybersecurity Data Streams | Proceedings of AI for Cyber Security Workshop at AAAI 2017 | null | null | null | cs.NE cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of an organization's computer network activity is a key component of
early detection and mitigation of insider threat, a growing concern for many
organizations. Raw system logs are a prototypical example of streaming data
that can quickly scale beyond the cognitive power of a human analyst. As a
prospective filter for the human analyst, we present an online unsupervised
deep learning approach to detect anomalous network activity from system logs in
real time. Our models decompose anomaly scores into the contributions of
individual user behavior features for increased interpretability to aid
analysts reviewing potential cases of insider threat. Using the CERT Insider
Threat Dataset v6.2 and threat detection recall as our performance metric, our
novel deep and recurrent neural network models outperform Principal Component
Analysis, Support Vector Machine and Isolation Forest based anomaly detection
baselines. For our best model, the events labeled as insider threat activity in
our dataset had an average anomaly score in the 95.53 percentile, demonstrating
our approach's potential to greatly reduce analyst workloads.
| [
{
"created": "Mon, 2 Oct 2017 17:54:28 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2017 20:53:03 GMT",
"version": "v2"
}
] | 2017-12-19 | [
[
"Tuor",
"Aaron",
""
],
[
"Kaplan",
"Samuel",
""
],
[
"Hutchinson",
"Brian",
""
],
[
"Nichols",
"Nicole",
""
],
[
"Robinson",
"Sean",
""
]
] | Analysis of an organization's computer network activity is a key component of early detection and mitigation of insider threat, a growing concern for many organizations. Raw system logs are a prototypical example of streaming data that can quickly scale beyond the cognitive power of a human analyst. As a prospective filter for the human analyst, we present an online unsupervised deep learning approach to detect anomalous network activity from system logs in real time. Our models decompose anomaly scores into the contributions of individual user behavior features for increased interpretability to aid analysts reviewing potential cases of insider threat. Using the CERT Insider Threat Dataset v6.2 and threat detection recall as our performance metric, our novel deep and recurrent neural network models outperform Principal Component Analysis, Support Vector Machine and Isolation Forest based anomaly detection baselines. For our best model, the events labeled as insider threat activity in our dataset had an average anomaly score in the 95.53 percentile, demonstrating our approach's potential to greatly reduce analyst workloads. |
2009.06381 | Anas Blasi | Mohammed A. Alsuwaiket, Anas H. Blasi, Khawla Altarawneh | Refining Student Marks based on Enrolled Modules Assessment Methods
using Data Mining Techniques | arXiv admin note: substantial text overlap with arXiv:2008.13255 | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Choosing the right and effective way to assess students is one of the most
important tasks of higher education. Many studies have shown that students tend
to receive higher scores during their studies when assessed by different study
methods which include units that are fully assessed by varying the duration of
study or a combination of courses and exams than by exams alone. Many
Educational Data Mining studies process data in advance through traditional
data extraction, including the data preparation process. In this paper, we
propose a different data preparation process by investigating more than 230000
student records for the preparation of scores. The data have been processed
through diverse stages in order to extract a categorical factor through which
students' module marks are refined during the data preparation stage. The
results of this work show that students' final marks should not be isolated from
the nature of the enrolled module assessment methods. They must rather be
investigated thoroughly and considered during the EDM data preprocessing stage.
More generally, educational data should not be prepared in the same way normal
data are due to the differences in data sources, applications, and error types.
The effect of the Module Assessment Index (MAI) on the prediction process using
Random Forest and Naive Bayes classification techniques was investigated. It was
shown that considering MAI as an attribute increases the accuracy of predicting
students' second-year averages based on their first-year averages.
| [
{
"created": "Sun, 30 Aug 2020 19:47:45 GMT",
"version": "v1"
}
] | 2020-09-15 | [
[
"Alsuwaiket",
"Mohammed A.",
""
],
[
"Blasi",
"Anas H.",
""
],
[
"Altarawneh",
"Khawla",
""
]
] | Choosing the right and effective way to assess students is one of the most important tasks of higher education. Many studies have shown that students tend to receive higher scores during their studies when assessed by different study methods which include units that are fully assessed by varying the duration of study or a combination of courses and exams than by exams alone. Many Educational Data Mining studies process data in advance through traditional data extraction, including the data preparation process. In this paper, we propose a different data preparation process by investigating more than 230000 student records for the preparation of scores. The data have been processed through diverse stages in order to extract a categorical factor through which students module marks are refined during the data preparation stage. The results of this work show that students final marks should not be isolated from the nature of the enrolled module assessment methods. They must rather be investigated thoroughly and considered during EDM data preprocessing stage. More generally, educational data should not be prepared in the same way normal data are due to the differences in data sources, applications, and error types. The effect of Module Assessment Index on the prediction process using Random Forest and Naive Bayes classification techniques were investigated. It was shown that considering MAI as attribute increases the accuracy of predicting students second year averages based on their first year averages. |
2402.09845 | Maik Ender | Maik Ender and Felix Hahn and Marc Fyrbiak and Amir Moradi and
Christof Paar | JustSTART: How to Find an RSA Authentication Bypass on Xilinx
UltraScale(+) with Fuzzing | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Fuzzing is a well-established technique in the software domain to uncover
bugs and vulnerabilities. Yet, applications of fuzzing for security
vulnerabilities in hardware systems are scarce; a principal reason is the
requirement for access to design information (HDL source code). Moreover,
observation of internal hardware state during runtime is typically an
ineffective information source, as its documentation is often not publicly
available. In addition, such observation during runtime is also inefficient due
to bandwidth-limited analysis interfaces (JTAG, and minimal introspection of
internal modules). In this work, we investigate fuzzing for 7-Series and
UltraScale(+) FPGA configuration engines, the control plane governing the
(secure) bitstream configuration within the FPGA. Our goal is to examine the
effectiveness of fuzzing to analyze and document the opaque inner workings of
FPGA configuration engines, with a primary emphasis on identifying security
vulnerabilities. Using only the publicly available chip and dispersed
documentation, we first design and implement ConFuzz, an advanced FPGA
configuration engine fuzzing and rapid prototyping framework. Based on our
detailed understanding of the bitstream file format, we then systematically
define 3 novel key fuzzing strategies for Xilinx configuration engines.
Moreover, our strategies are executed through mutational structure-aware
fuzzers and incorporate various novel custom-tailored, FPGA-specific
optimizations. Our evaluation reveals previously undocumented behavior within
the configuration engine, including critical findings such as system crashes
leading to unresponsive states of the FPGA. In addition, our investigations not
only lead to the rediscovery of the Starbleed attack but also uncover JustSTART
(CVE-2023-20570), capable of circumventing RSA authentication for Xilinx
UltraScale(+). Note that we also discuss countermeasures.
| [
{
"created": "Thu, 15 Feb 2024 10:03:35 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Ender",
"Maik",
""
],
[
"Hahn",
"Felix",
""
],
[
"Fyrbiak",
"Marc",
""
],
[
"Moradi",
"Amir",
""
],
[
"Paar",
"Christof",
""
]
] | Fuzzing is a well-established technique in the software domain to uncover bugs and vulnerabilities. Yet, applications of fuzzing for security vulnerabilities in hardware systems are scarce, as principal reasons are requirements for design information access (HDL source code). Moreover, observation of internal hardware state during runtime is typically an ineffective information source, as its documentation is often not publicly available. In addition, such observation during runtime is also inefficient due to bandwidth-limited analysis interfaces (JTAG, and minimal introspection of internal modules). In this work, we investigate fuzzing for 7-Series and UltraScale(+) FPGA configuration engines, the control plane governing the (secure) bitstream configuration within the FPGA. Our goal is to examine the effectiveness of fuzzing to analyze and document the opaque inner workings of FPGA configuration engines, with a primary emphasis on identifying security vulnerabilities. Using only the publicly available chip and dispersed documentation, we first design and implement ConFuzz, an advanced FPGA configuration engine fuzzing and rapid prototyping framework. Based on our detailed understanding of the bitstream file format, we then systematically define 3 novel key fuzzing strategies for Xilinx configuration engines. Moreover, our strategies are executed through mutational structure-aware fuzzers and incorporate various novel custom-tailored, FPGA-specific optimizations. Our evaluation reveals previously undocumented behavior within the configuration engine, including critical findings such as system crashes leading to unresponsive states of the FPGA. In addition, our investigations not only lead to the rediscovery of the starbleed attack but also uncover JustSTART (CVE-2023-20570), capable of circumventing RSA authentication for Xilinx UltraScale(+). Note that we also discuss countermeasures. |
1103.3099 | Rahul Urgaonkar | Rahul Urgaonkar, Bhuvan Urgaonkar, Michael J. Neely, Anand
Sivasubramaniam | Optimal Power Cost Management Using Stored Energy in Data Centers | Full version of Sigmetrics 2011 paper | null | null | null | cs.PF cs.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the electricity bill of a data center constitutes a significant portion
of its overall operational costs, reducing this has become important. We
investigate cost reduction opportunities that arise from the use of uninterruptible
power supply (UPS) units as energy storage devices. This represents a deviation
from the usual use of these devices as mere transitional fail-over mechanisms
between utility and captive sources such as diesel generators. We consider the
problem of opportunistically using these devices to reduce the time average
electric utility bill in a data center. Using the technique of Lyapunov
optimization, we develop an online control algorithm that can optimally exploit
these devices to minimize the time average cost. This algorithm operates
without any knowledge of the statistics of the workload or electricity cost
processes, making it attractive in the presence of workload and pricing
uncertainties. An interesting feature of our algorithm is that its deviation
from optimality reduces as the storage capacity is increased. Our work opens up
a new area in data center power management.
| [
{
"created": "Wed, 16 Mar 2011 05:37:18 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Mar 2011 12:58:09 GMT",
"version": "v2"
}
] | 2011-03-22 | [
[
"Urgaonkar",
"Rahul",
""
],
[
"Urgaonkar",
"Bhuvan",
""
],
[
"Neely",
"Michael J.",
""
],
[
"Sivasubramaniam",
"Anand",
""
]
] | Since the electricity bill of a data center constitutes a significant portion of its overall operational costs, reducing this has become important. We investigate cost reduction opportunities that arise by the use of uninterrupted power supply (UPS) units as energy storage devices. This represents a deviation from the usual use of these devices as mere transitional fail-over mechanisms between utility and captive sources such as diesel generators. We consider the problem of opportunistically using these devices to reduce the time average electric utility bill in a data center. Using the technique of Lyapunov optimization, we develop an online control algorithm that can optimally exploit these devices to minimize the time average cost. This algorithm operates without any knowledge of the statistics of the workload or electricity cost processes, making it attractive in the presence of workload and pricing uncertainties. An interesting feature of our algorithm is that its deviation from optimality reduces as the storage capacity is increased. Our work opens up a new area in data center power management. |
2011.13614 | Shanshan Wang | Kehan Qi, Yu Gong, Xinfeng Liu, Xin Liu, Hairong Zheng, Shanshan Wang | Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted
Deep Learning | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Noise, artifacts, and loss of information caused by the magnetic resonance
(MR) reconstruction may compromise the final performance of the downstream
applications. In this paper, we develop a re-weighted multi-task deep learning
method to learn prior knowledge from an existing large dataset and then utilize
it to assist simultaneous MR reconstruction and segmentation from the
under-sampled k-space data. The multi-task deep learning framework is equipped
with two network sub-modules, which are integrated and trained by our designed
iterative teacher forcing scheme (ITFS) under the dynamic re-weighted loss
constraint (DRLC). The ITFS is designed to avoid error accumulation by
injecting the fully-sampled data into the training process. The DRLC is
proposed to dynamically balance the contributions from the reconstruction and
segmentation sub-modules so as to co-promote the multi-task accuracy. The
proposed method has been evaluated on two open datasets and one in vivo
in-house dataset and compared to six state-of-the-art methods. Results show
that the proposed method possesses encouraging capabilities for simultaneous
and accurate MR reconstruction and segmentation.
| [
{
"created": "Fri, 27 Nov 2020 09:08:05 GMT",
"version": "v1"
}
] | 2020-11-30 | [
[
"Qi",
"Kehan",
""
],
[
"Gong",
"Yu",
""
],
[
"Liu",
"Xinfeng",
""
],
[
"Liu",
"Xin",
""
],
[
"Zheng",
"Hairong",
""
],
[
"Wang",
"Shanshan",
""
]
] | Noises, artifacts, and loss of information caused by the magnetic resonance (MR) reconstruction may compromise the final performance of the downstream applications. In this paper, we develop a re-weighted multi-task deep learning method to learn prior knowledge from the existing big dataset and then utilize them to assist simultaneous MR reconstruction and segmentation from the under-sampled k-space data. The multi-task deep learning framework is equipped with two network sub-modules, which are integrated and trained by our designed iterative teacher forcing scheme (ITFS) under the dynamic re-weighted loss constraint (DRLC). The ITFS is designed to avoid error accumulation by injecting the fully-sampled data into the training process. The DRLC is proposed to dynamically balance the contributions from the reconstruction and segmentation sub-modules so as to co-prompt the multi-task accuracy. The proposed method has been evaluated on two open datasets and one in vivo in-house dataset and compared to six state-of-the-art methods. Results show that the proposed method possesses encouraging capabilities for simultaneous and accurate MR reconstruction and segmentation. |
2207.05959 | Jianghong Ma | Tianjun Wei, Jianghong Ma, Tommy W. S. Chow | Fine-tuning Partition-aware Item Similarities for Efficient and Scalable
Recommendation | Accepted by The 2023 ACM Web Conference (WWW 2023) | null | 10.1145/3543507.3583240 | null | cs.IR cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Collaborative filtering (CF) is widely searched in recommendation with
various types of solutions. Recent success of Graph Convolution Networks (GCN)
in CF demonstrates the effectiveness of modeling high-order relationships
through graphs, while repetitive graph convolution and iterative batch
optimization limit their efficiency. Instead, item similarity models attempt to
construct direct relationships through efficient interaction encoding. Despite
their great performance, the growing item numbers result in quadratic growth in
similarity modeling process, posing critical scalability problems. In this
paper, we investigate the graph sampling strategy adopted in the latest GCN model
to improve efficiency, and identify the potential item group structure in
the sampled graph. Based on this, we propose a novel item similarity model
which introduces graph partitioning to restrict the item similarity modeling
within each partition. Specifically, we show that the spectral information of
the original graph is effective in preserving global-level information. Then, it is
added to fine-tune local item similarities with a new data augmentation
strategy acting as partition-aware prior knowledge, jointly to cope with the
information loss brought by partitioning. Experiments carried out on 4 datasets
show that the proposed model outperforms state-of-the-art GCN models with 10x
speed-up and item similarity models with 95\% parameter storage savings.
| [
{
"created": "Wed, 13 Jul 2022 04:37:48 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Feb 2023 07:10:42 GMT",
"version": "v2"
}
] | 2023-02-13 | [
[
"Wei",
"Tianjun",
""
],
[
"Ma",
"Jianghong",
""
],
[
"Chow",
"Tommy W. S.",
""
]
] | Collaborative filtering (CF) is widely searched in recommendation with various types of solutions. Recent success of Graph Convolution Networks (GCN) in CF demonstrates the effectiveness of modeling high-order relationships through graphs, while repetitive graph convolution and iterative batch optimization limit their efficiency. Instead, item similarity models attempt to construct direct relationships through efficient interaction encoding. Despite their great performance, the growing item numbers result in quadratic growth in similarity modeling process, posing critical scalability problems. In this paper, we investigate the graph sampling strategy adopted in latest GCN model for efficiency improving, and identify the potential item group structure in the sampled graph. Based on this, we propose a novel item similarity model which introduces graph partitioning to restrict the item similarity modeling within each partition. Specifically, we show that the spectral information of the original graph is well in preserving global-level information. Then, it is added to fine-tune local item similarities with a new data augmentation strategy acted as partition-aware prior knowledge, jointly to cope with the information loss brought by partitioning. Experiments carried out on 4 datasets show that the proposed model outperforms state-of-the-art GCN models with 10x speed-up and item similarity models with 95\% parameter storage savings. |
1310.2322 | Morgan Chopin | Janka Chleb\'ikov\'a and Morgan Chopin | The Firefighter Problem: A Structural Analysis | null | null | null | null | cs.DM cs.DS math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the complexity of the firefighter problem where b>=1 firefighters
are available at each time step. This problem is proved NP-complete even on
trees of degree at most three and budget one (Finbow et al.,2007) and on trees
of bounded degree b+3 for any fixed budget b>=2 (Bazgan et al.,2012). In this
paper, we provide further insight into the complexity landscape of the problem
by showing that the pathwidth and the maximum degree of the input graph govern
its complexity. More precisely, we first prove that the problem is NP-complete
even on trees of pathwidth at most three for any fixed budget b>=1. We then
show that the problem turns out to be fixed-parameter tractable with respect to
the combined parameter "pathwidth" and "maximum degree" of the input graph.
| [
{
"created": "Wed, 9 Oct 2013 01:39:10 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Apr 2014 07:49:59 GMT",
"version": "v2"
}
] | 2014-04-29 | [
[
"Chlebíková",
"Janka",
""
],
[
"Chopin",
"Morgan",
""
]
] | We consider the complexity of the firefighter problem where b>=1 firefighters are available at each time step. This problem is proved NP-complete even on trees of degree at most three and budget one (Finbow et al.,2007) and on trees of bounded degree b+3 for any fixed budget b>=2 (Bazgan et al.,2012). In this paper, we provide further insight into the complexity landscape of the problem by showing that the pathwidth and the maximum degree of the input graph govern its complexity. More precisely, we first prove that the problem is NP-complete even on trees of pathwidth at most three for any fixed budget b>=1. We then show that the problem turns out to be fixed parameter-tractable with respect to the combined parameter "pathwidth" and "maximum degree" of the input graph. |
2308.02043 | Zulqarnain Rashid Dr | Zulqarnain Rashid, Amos A Folarin, Yatharth Ranjan, Pauline Conde,
Heet Sankesara, Yuezhou Zhang, Shaoxiong Sun, Callum Stewart, Petroula Laiou,
Richard JB Dobson | Disease Insight through Digital Biomarkers Developed by Remotely
Collected Wearables and Smartphone Data | null | null | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Digital Biomarkers and remote patient monitoring can provide valuable and
timely insights into how a patient is coping with their condition (disease
progression, treatment response, etc.), complementing treatment in traditional
healthcare settings. Smartphones with embedded and connected sensors have
immense potential for improving healthcare through various apps and mHealth
(mobile health) platforms. This capability could enable the development of
reliable digital biomarkers from long-term longitudinal data collected remotely
from patients. We built an open-source platform, RADAR-base, to support
large-scale data collection in remote monitoring studies. RADAR-base is a
modern remote data collection platform built around Confluent's Apache Kafka,
to support scalability, extensibility, security, privacy and quality of data.
It provides support for study design and set-up, active (e.g. PROMs) and passive
(e.g. phone sensors, wearable devices and IoT) remote data collection
capabilities with feature generation (e.g. behavioural, environmental and
physiological markers). The backend enables secure data transmission, and
scalable solutions for data storage, management and data access. The platform
has successfully collected longitudinal data for various cohorts in a number of
disease areas including Multiple Sclerosis, Depression, Epilepsy, ADHD,
Alzheimer, Autism and Lung diseases. Digital biomarkers developed through
collected data are providing useful insights into different diseases.
RADAR-base provides a modern open-source, community-driven solution for remote
monitoring, data collection, and digital phenotyping of physical and mental
health diseases. Clinicians can use digital biomarkers to augment their
decision making for the prevention, personalisation and early intervention of
disease.
| [
{
"created": "Thu, 3 Aug 2023 22:44:48 GMT",
"version": "v1"
}
] | 2023-08-07 | [
[
"Rashid",
"Zulqarnain",
""
],
[
"Folarin",
"Amos A",
""
],
[
"Ranjan",
"Yatharth",
""
],
[
"Conde",
"Pauline",
""
],
[
"Sankesara",
"Heet",
""
],
[
"Zhang",
"Yuezhou",
""
],
[
"Sun",
"Shaoxiong",
""
],
[
"Stewart",
"Callum",
""
],
[
"Laiou",
"Petroula",
""
],
[
"Dobson",
"Richard JB",
""
]
] | Digital Biomarkers and remote patient monitoring can provide valuable and timely insights into how a patient is coping with their condition (disease progression, treatment response, etc.), complementing treatment in traditional healthcare settings.Smartphones with embedded and connected sensors have immense potential for improving healthcare through various apps and mHealth (mobile health) platforms. This capability could enable the development of reliable digital biomarkers from long-term longitudinal data collected remotely from patients. We built an open-source platform, RADAR-base, to support large-scale data collection in remote monitoring studies. RADAR-base is a modern remote data collection platform built around Confluent's Apache Kafka, to support scalability, extensibility, security, privacy and quality of data. It provides support for study design and set-up, active (eg PROMs) and passive (eg. phone sensors, wearable devices and IoT) remote data collection capabilities with feature generation (eg. behavioural, environmental and physiological markers). The backend enables secure data transmission, and scalable solutions for data storage, management and data access. The platform has successfully collected longitudinal data for various cohorts in a number of disease areas including Multiple Sclerosis, Depression, Epilepsy, ADHD, Alzheimer, Autism and Lung diseases. Digital biomarkers developed through collected data are providing useful insights into different diseases. RADAR-base provides a modern open-source, community-driven solution for remote monitoring, data collection, and digital phenotyping of physical and mental health diseases. Clinicians can use digital biomarkers to augment their decision making for the prevention, personalisation and early intervention of disease. |
2005.00842 | Tatsuki Kuribayashi | Tatsuki Kuribayashi, Takumi Ito, Jun Suzuki, Kentaro Inui | Language Models as an Alternative Evaluator of Word Order Hypotheses: A
Case Study in Japanese | Accepted by ACL2020 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine a methodology using neural language models (LMs) for analyzing the
word order of language. This LM-based method has the potential to overcome the
difficulties existing methods face, such as the propagation of preprocessor
errors in count-based methods. In this study, we explore whether the LM-based
method is valid for analyzing the word order. As a case study, this study
focuses on Japanese due to its complex and flexible word order. To validate the
LM-based method, we test (i) parallels between LMs and human word order
preference, and (ii) consistency of the results obtained using the LM-based
method with previous linguistic studies. Through our experiments, we
tentatively conclude that LMs display sufficient word order knowledge for usage
as an analysis tool. Finally, using the LM-based method, we demonstrate the
relationship between the canonical word order and topicalization, which had yet
to be analyzed by large-scale experiments.
| [
{
"created": "Sat, 2 May 2020 14:32:40 GMT",
"version": "v1"
}
] | 2020-05-05 | [
[
"Kuribayashi",
"Tatsuki",
""
],
[
"Ito",
"Takumi",
""
],
[
"Suzuki",
"Jun",
""
],
[
"Inui",
"Kentaro",
""
]
] | We examine a methodology using neural language models (LMs) for analyzing the word order of language. This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LM-based method is valid for analyzing the word order. As a case study, this study focuses on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preference, and (ii) consistency of the results obtained using the LM-based method with previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool. Finally, using the LM-based method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by large-scale experiments. |
2212.11725 | Fabrice Rossi | Aichetou Bouchareb (SAMM), Marc Boull\'e, Fabrice Cl\'erot, Fabrice
Rossi (CEREMADE) | Model Based Co-clustering of Mixed Numerical and Binary Data | null | Advances in Knowledge Discovery and Management, 834, Springer
International Publishing, pp.3-22, 2019, Studies in Computational
Intelligence | 10.1007/978-3-030-18129-1_1 | null | cs.LG math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Co-clustering is a data mining technique used to extract the underlying block
structure between the rows and columns of a data matrix. Many approaches have
been studied and have shown their capacity to extract such structures in
continuous, binary or contingency tables. However, very little work has been
done to perform co-clustering on mixed type data. In this article, we extend
the latent block models based co-clustering to the case of mixed data
(continuous and binary variables). We then evaluate the effectiveness of the
proposed approach on simulated data and we discuss its advantages and potential
limits.
| [
{
"created": "Thu, 22 Dec 2022 14:16:08 GMT",
"version": "v1"
}
] | 2022-12-23 | [
[
"Bouchareb",
"Aichetou",
"",
"SAMM"
],
[
"Boullé",
"Marc",
"",
"CEREMADE"
],
[
"Clérot",
"Fabrice",
"",
"CEREMADE"
],
[
"Rossi",
"Fabrice",
"",
"CEREMADE"
]
] | Co-clustering is a data mining technique used to extract the underlying block structure between the rows and columns of a data matrix. Many approaches have been studied and have shown their capacity to extract such structures in continuous, binary or contingency tables. However, very little work has been done to perform co-clustering on mixed type data. In this article, we extend the latent block models based co-clustering to the case of mixed data (continuous and binary variables). We then evaluate the effectiveness of the proposed approach on simulated data and we discuss its advantages and potential limits. |