id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1802.05844 | Kui Yu | Kui Yu, Lin Liu, and Jiuyong Li | A Unified View of Causal and Non-causal Feature Selection | null | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim to develop a unified view of causal and non-causal
feature selection methods. The unified view will fill in the gap in the
research of the relation between the two types of methods. Based on the
Bayesian network framework and information theory, we first show that causal
and non-causal feature selection methods share the same objective. That is to
find the Markov blanket of a class attribute, the theoretically optimal feature
set for classification. We then examine the assumptions made by causal and
non-causal feature selection methods when searching for the optimal feature
set, and unify the assumptions by mapping them to the restrictions on the
structure of the Bayesian network model of the studied problem. We further
analyze in detail how the structural assumptions lead to the different levels
of approximations employed by the methods in their search, which then result in
the approximations in the feature sets found by the methods with respect to the
optimal feature set. With the unified view, we are able to interpret the output
of non-causal methods from a causal perspective and derive the error bounds of
both types of methods. Finally, we present a practical understanding of the
relation between causal and non-causal methods using extensive experiments with
synthetic data and various types of real-world data.
| [
{
"created": "Fri, 16 Feb 2018 06:18:06 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2018 23:49:40 GMT",
"version": "v2"
},
{
"created": "Wed, 23 May 2018 06:38:53 GMT",
"version": "v3"
},
{
"created": "Sun, 16 Dec 2018 03:45:56 GMT",
"version": "v4"
}
] | 2018-12-18 | [
[
"Yu",
"Kui",
""
],
[
"Liu",
"Lin",
""
],
[
"Li",
"Jiuyong",
""
]
] | In this paper, we aim to develop a unified view of causal and non-causal feature selection methods. The unified view will fill in the gap in the research of the relation between the two types of methods. Based on the Bayesian network framework and information theory, we first show that causal and non-causal feature selection methods share the same objective. That is to find the Markov blanket of a class attribute, the theoretically optimal feature set for classification. We then examine the assumptions made by causal and non-causal feature selection methods when searching for the optimal feature set, and unify the assumptions by mapping them to the restrictions on the structure of the Bayesian network model of the studied problem. We further analyze in detail how the structural assumptions lead to the different levels of approximations employed by the methods in their search, which then result in the approximations in the feature sets found by the methods with respect to the optimal feature set. With the unified view, we are able to interpret the output of non-causal methods from a causal perspective and derive the error bounds of both types of methods. Finally, we present a practical understanding of the relation between causal and non-causal methods using extensive experiments with synthetic data and various types of real-world data. |
2203.13993 | Xiaomeng Li | Xiaoxiao Liang, Yiqun Lin, Huazhu Fu, Lei Zhu, Xiaomeng Li | RSCFed: Random Sampling Consensus Federated Semi-supervised Learning | CVPR 2022, code: https://github.com/XMed-Lab/RSCFed | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated semi-supervised learning (FSSL) aims to derive a global model by
training fully-labeled and fully-unlabeled clients or training partially
labeled clients. The existing approaches work well when local clients have
independent and identically distributed (IID) data but fail to generalize to a
more practical FSSL setting, i.e., Non-IID setting. In this paper, we present a
Random Sampling Consensus Federated learning, namely RSCFed, by considering the
uneven reliability among models from fully-labeled clients, fully-unlabeled
clients or partially labeled clients. Our key motivation is that given models
with large deviations from either labeled clients or unlabeled clients, the
consensus could be reached by performing random sub-sampling over clients. To
achieve it, instead of directly aggregating local models, we first distill
several sub-consensus models by random sub-sampling over clients and then
aggregate the sub-consensus models into the global model. To enhance the
robustness of sub-consensus models, we also develop a novel distance-reweighted
model aggregation method. Experimental results show that our method outperforms
state-of-the-art methods on three benchmarked datasets, including both natural
and medical images. The code is available at
https://github.com/XMed-Lab/RSCFed.
| [
{
"created": "Sat, 26 Mar 2022 05:10:44 GMT",
"version": "v1"
}
] | 2022-03-29 | [
[
"Liang",
"Xiaoxiao",
""
],
[
"Lin",
"Yiqun",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Zhu",
"Lei",
""
],
[
"Li",
"Xiaomeng",
""
]
] | Federated semi-supervised learning (FSSL) aims to derive a global model by training fully-labeled and fully-unlabeled clients or training partially labeled clients. The existing approaches work well when local clients have independent and identically distributed (IID) data but fail to generalize to a more practical FSSL setting, i.e., Non-IID setting. In this paper, we present a Random Sampling Consensus Federated learning, namely RSCFed, by considering the uneven reliability among models from fully-labeled clients, fully-unlabeled clients or partially labeled clients. Our key motivation is that given models with large deviations from either labeled clients or unlabeled clients, the consensus could be reached by performing random sub-sampling over clients. To achieve it, instead of directly aggregating local models, we first distill several sub-consensus models by random sub-sampling over clients and then aggregate the sub-consensus models into the global model. To enhance the robustness of sub-consensus models, we also develop a novel distance-reweighted model aggregation method. Experimental results show that our method outperforms state-of-the-art methods on three benchmarked datasets, including both natural and medical images. The code is available at https://github.com/XMed-Lab/RSCFed. |
2205.14550 | Swapnil Sayan Saha | Swapnil Sayan Saha, Sandeep Singh Sandha, Mani Srivastava | Machine Learning for Microcontroller-Class Hardware: A Review | Published in IEEE Sensors Journal. Cite this as: S. S. Saha, S. S.
Sandha and M. Srivastava, "Machine Learning for Microcontroller-Class
Hardware: A Review," in IEEE Sensors Journal, vol. 22, no. 22, pp.
21362-21390, 15 Nov., 2022 | IEEE Sensors Journal, vol. 22, no. 22, pp. 21362-21390, 15 Nov.,
2022 | 10.1109/JSEN.2022.3210773 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The advancements in machine learning opened a new opportunity to bring
intelligence to the low-end Internet-of-Things nodes such as microcontrollers.
Conventional machine learning deployments have high memory and compute
footprints, hindering their direct deployment on ultra resource-constrained
microcontrollers. This paper highlights the unique requirements of enabling
onboard machine learning for microcontroller class devices. Researchers use a
specialized model development workflow for resource-limited applications to
ensure the compute and latency budget is within the device limits while still
maintaining the desired performance. We characterize a closed-loop widely
applicable workflow of machine learning model development for microcontroller
class devices and show that several classes of applications adopt a specific
instance of it. We present both qualitative and numerical insights into
different stages of model development by showcasing several use cases. Finally,
we identify the open research challenges and unsolved questions demanding
careful considerations moving forward.
| [
{
"created": "Sun, 29 May 2022 00:59:38 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Jun 2022 15:50:30 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Jul 2022 04:52:56 GMT",
"version": "v3"
},
{
"created": "Wed, 16 Nov 2022 19:03:29 GMT",
"version": "v4"
},
{
"created": "Tue, 20 Dec 2022 20:52:55 GMT",
"version": "v5"
}
] | 2022-12-22 | [
[
"Saha",
"Swapnil Sayan",
""
],
[
"Sandha",
"Sandeep Singh",
""
],
[
"Srivastava",
"Mani",
""
]
] | The advancements in machine learning opened a new opportunity to bring intelligence to the low-end Internet-of-Things nodes such as microcontrollers. Conventional machine learning deployments have high memory and compute footprints, hindering their direct deployment on ultra resource-constrained microcontrollers. This paper highlights the unique requirements of enabling onboard machine learning for microcontroller class devices. Researchers use a specialized model development workflow for resource-limited applications to ensure the compute and latency budget is within the device limits while still maintaining the desired performance. We characterize a closed-loop widely applicable workflow of machine learning model development for microcontroller class devices and show that several classes of applications adopt a specific instance of it. We present both qualitative and numerical insights into different stages of model development by showcasing several use cases. Finally, we identify the open research challenges and unsolved questions demanding careful considerations moving forward. |
1604.08080 | Germ\'an Andr\'es Delbianco | Germ\'an Andr\'es Delbianco, Ilya Sergey, Aleksandar Nanevski and
Anindya Banerjee | Concurrent Data Structures Linked in Time | null | null | null | null | cs.LO cs.DC cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Arguments about correctness of a concurrent data structure are typically
carried out by using the notion of linearizability and specifying the
linearization points of the data structure's procedures. Such arguments are
often cumbersome as the linearization points' position in time can be dynamic
(depend on the interference, run-time values and events from the past, or even
future), non-local (appear in procedures other than the one considered), and
their position in the execution trace may only be determined after the
considered procedure has already terminated.
In this paper we propose a new method, based on a separation-style logic, for
reasoning about concurrent objects with such linearization points. We embrace
the dynamic nature of linearization points, and encode it as part of the data
structure's auxiliary state, so that it can be dynamically modified in place by
auxiliary code, as needed when some appropriate run-time event occurs. We name
the idea linking-in-time, because it reduces temporal reasoning to spatial
reasoning. For example, modifying a temporal position of a linearization point
can be modeled similarly to a pointer update in separation logic. Furthermore,
the auxiliary state provides a convenient way to concisely express the
properties essential for reasoning about clients of such concurrent objects. We
illustrate the method by verifying (mechanically in Coq) an intricate optimal
snapshot algorithm due to Jayanti, as well as some clients.
| [
{
"created": "Wed, 27 Apr 2016 14:13:46 GMT",
"version": "v1"
},
{
"created": "Tue, 3 May 2016 00:08:37 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Oct 2016 17:22:22 GMT",
"version": "v3"
},
{
"created": "Wed, 18 Jan 2017 13:23:29 GMT",
"version": "v4"
}
] | 2017-01-19 | [
[
"Delbianco",
"Germán Andrés",
""
],
[
"Sergey",
"Ilya",
""
],
[
"Nanevski",
"Aleksandar",
""
],
[
"Banerjee",
"Anindya",
""
]
] | Arguments about correctness of a concurrent data structure are typically carried out by using the notion of linearizability and specifying the linearization points of the data structure's procedures. Such arguments are often cumbersome as the linearization points' position in time can be dynamic (depend on the interference, run-time values and events from the past, or even future), non-local (appear in procedures other than the one considered), and their position in the execution trace may only be determined after the considered procedure has already terminated. In this paper we propose a new method, based on a separation-style logic, for reasoning about concurrent objects with such linearization points. We embrace the dynamic nature of linearization points, and encode it as part of the data structure's auxiliary state, so that it can be dynamically modified in place by auxiliary code, as needed when some appropriate run-time event occurs. We name the idea linking-in-time, because it reduces temporal reasoning to spatial reasoning. For example, modifying a temporal position of a linearization point can be modeled similarly to a pointer update in separation logic. Furthermore, the auxiliary state provides a convenient way to concisely express the properties essential for reasoning about clients of such concurrent objects. We illustrate the method by verifying (mechanically in Coq) an intricate optimal snapshot algorithm due to Jayanti, as well as some clients. |
1904.10176 | Songlin Xu | Songlin Xu and Jiacheng Zhu | Estimating Risk Levels of Driving Scenarios through Analysis of Driving
Styles for Autonomous Vehicles | null | null | null | null | cs.RO cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to operate safely on the road, autonomous vehicles need not only to
be able to identify objects in front of them, but also to be able to estimate
the risk level of the object in front of the vehicle automatically. It is
obvious that different objects have different levels of danger to autonomous
vehicles. An evaluation system is needed to automatically determine the danger
level of the object for the autonomous vehicle. It would be too subjective and
incomplete if the system were completely defined by humans. Based on this, we
propose a framework based on a nonparametric Bayesian learning method -- a
sticky hierarchical Dirichlet process hidden Markov model (sticky HDP-HMM), and
discover the relationship between driving scenarios and driving styles. We use
the analysis of driving styles of autonomous vehicles to reflect the risk
levels of driving scenarios to the vehicles. In this framework, we first use
sticky HDP-HMM to extract driving styles from the dataset and get different
clusters, then an evaluation system is proposed to evaluate and rank the
urgency levels of the clusters. Finally, we map the driving scenarios to the
ranking results and thus get clusters of driving scenarios in different risk
levels. More importantly, we find the relationship between driving scenarios
and driving styles. The experiment shows that our framework can cluster and
rank driving styles of different urgency levels and find the relationship
between driving scenarios and driving styles, and the conclusions also fit
people's common sense when driving. Furthermore, this framework can be used for
autonomous vehicles to estimate risk levels of driving scenarios and help them
make precise and safe decisions.
| [
{
"created": "Tue, 23 Apr 2019 06:55:48 GMT",
"version": "v1"
}
] | 2019-04-24 | [
[
"Xu",
"Songlin",
""
],
[
"Zhu",
"Jiacheng",
""
]
] | In order to operate safely on the road, autonomous vehicles need not only to be able to identify objects in front of them, but also to be able to estimate the risk level of the object in front of the vehicle automatically. It is obvious that different objects have different levels of danger to autonomous vehicles. An evaluation system is needed to automatically determine the danger level of the object for the autonomous vehicle. It would be too subjective and incomplete if the system were completely defined by humans. Based on this, we propose a framework based on a nonparametric Bayesian learning method -- a sticky hierarchical Dirichlet process hidden Markov model (sticky HDP-HMM), and discover the relationship between driving scenarios and driving styles. We use the analysis of driving styles of autonomous vehicles to reflect the risk levels of driving scenarios to the vehicles. In this framework, we first use sticky HDP-HMM to extract driving styles from the dataset and get different clusters, then an evaluation system is proposed to evaluate and rank the urgency levels of the clusters. Finally, we map the driving scenarios to the ranking results and thus get clusters of driving scenarios in different risk levels. More importantly, we find the relationship between driving scenarios and driving styles. The experiment shows that our framework can cluster and rank driving styles of different urgency levels and find the relationship between driving scenarios and driving styles, and the conclusions also fit people's common sense when driving. Furthermore, this framework can be used for autonomous vehicles to estimate risk levels of driving scenarios and help them make precise and safe decisions. |
1304.5966 | Mile Sikic | Matija Korpar and Mile Sikic | SW# - GPU enabled exact alignments on genome scale | 3 pages, 1 figure, 1 table | null | null | null | cs.DC cs.CE q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Sequence alignment is one of the oldest and the most famous problems in
bioinformatics. Even after 45 years, for one reason or another, this problem is
still relevant; current solutions are trade-offs between execution time, memory
consumption and accuracy. We propose SW#, a new CUDA GPU enabled and memory
efficient implementation of dynamic programming algorithms for local alignment.
In this implementation indels are treated using the affine gap model. Although
there are other GPU implementations of the Smith-Waterman algorithm, SW# is the
only publicly available implementation that can produce sequence alignments on
a genome-wide scale. For long sequences, our implementation is at least a few
hundred times faster than a CPU version of the same algorithm.
| [
{
"created": "Mon, 22 Apr 2013 14:40:15 GMT",
"version": "v1"
}
] | 2013-04-23 | [
[
"Korpar",
"Matija",
""
],
[
"Sikic",
"Mile",
""
]
] | Sequence alignment is one of the oldest and the most famous problems in bioinformatics. Even after 45 years, for one reason or another, this problem is still relevant; current solutions are trade-offs between execution time, memory consumption and accuracy. We propose SW#, a new CUDA GPU enabled and memory efficient implementation of dynamic programming algorithms for local alignment. In this implementation indels are treated using the affine gap model. Although there are other GPU implementations of the Smith-Waterman algorithm, SW# is the only publicly available implementation that can produce sequence alignments on a genome-wide scale. For long sequences, our implementation is at least a few hundred times faster than a CPU version of the same algorithm. |
2001.02773 | Yuqiao Chen | Yuqiao Chen, Yibo Yang, Sriraam Natarajan, Nicholas Ruozzi | Lifted Hybrid Variational Inference | AAAI 2020 Workshop on Statistical Relational AI (StarAI 2020) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variety of lifted inference algorithms, which exploit model symmetry to
reduce computational cost, have been proposed to render inference tractable in
probabilistic relational models. Most existing lifted inference algorithms
operate only over discrete domains or continuous domains with restricted
potential functions, e.g., Gaussian. We investigate two approximate lifted
variational approaches that are applicable to hybrid domains and expressive
enough to capture multi-modality. We demonstrate that the proposed variational
methods are both scalable and can take advantage of approximate model
symmetries, even in the presence of a large amount of continuous evidence. We
demonstrate that our approach compares favorably against existing
message-passing based approaches in a variety of settings. Finally, we present
a sufficient condition for the Bethe approximation to yield a non-trivial
estimate over the marginal polytope.
| [
{
"created": "Wed, 8 Jan 2020 22:29:07 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Feb 2020 03:13:02 GMT",
"version": "v2"
}
] | 2020-02-11 | [
[
"Chen",
"Yuqiao",
""
],
[
"Yang",
"Yibo",
""
],
[
"Natarajan",
"Sriraam",
""
],
[
"Ruozzi",
"Nicholas",
""
]
] | A variety of lifted inference algorithms, which exploit model symmetry to reduce computational cost, have been proposed to render inference tractable in probabilistic relational models. Most existing lifted inference algorithms operate only over discrete domains or continuous domains with restricted potential functions, e.g., Gaussian. We investigate two approximate lifted variational approaches that are applicable to hybrid domains and expressive enough to capture multi-modality. We demonstrate that the proposed variational methods are both scalable and can take advantage of approximate model symmetries, even in the presence of a large amount of continuous evidence. We demonstrate that our approach compares favorably against existing message-passing based approaches in a variety of settings. Finally, we present a sufficient condition for the Bethe approximation to yield a non-trivial estimate over the marginal polytope. |
1905.09275 | Nicholas Watters | Nicholas Watters, Loic Matthey, Matko Bosnjak, Christopher P. Burgess,
Alexander Lerchner | COBRA: Data-Efficient Model-Based RL through Unsupervised Object
Discovery and Curiosity-Driven Exploration | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data efficiency and robustness to task-irrelevant perturbations are
long-standing challenges for deep reinforcement learning algorithms. Here we
introduce a modular approach to addressing these challenges in a continuous
control environment, without using hand-crafted or supervised information. Our
Curious Object-Based seaRch Agent (COBRA) uses task-free intrinsically
motivated exploration and unsupervised learning to build object-based models of
its environment and action space. Subsequently, it can learn a variety of tasks
through model-based search in very few steps and excel on structured hold-out
tests of policy robustness.
| [
{
"created": "Wed, 22 May 2019 17:59:32 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2019 10:36:39 GMT",
"version": "v2"
}
] | 2019-08-15 | [
[
"Watters",
"Nicholas",
""
],
[
"Matthey",
"Loic",
""
],
[
"Bosnjak",
"Matko",
""
],
[
"Burgess",
"Christopher P.",
""
],
[
"Lerchner",
"Alexander",
""
]
] | Data efficiency and robustness to task-irrelevant perturbations are long-standing challenges for deep reinforcement learning algorithms. Here we introduce a modular approach to addressing these challenges in a continuous control environment, without using hand-crafted or supervised information. Our Curious Object-Based seaRch Agent (COBRA) uses task-free intrinsically motivated exploration and unsupervised learning to build object-based models of its environment and action space. Subsequently, it can learn a variety of tasks through model-based search in very few steps and excel on structured hold-out tests of policy robustness. |
1301.6236 | Alexander Zeh | Johan Sebastian Rosenkilde Nielsen (Technical University of Denmark),
Alexander Zeh (INT - University of Ulm., INRIA Saclay - Ile de France) | Multi-Trial Guruswami--Sudan Decoding for Generalised Reed--Solomon
Codes | WCC 2013 International Workshop on Coding and Cryptography (2013) | International Workshop on Coding and Cryptography (WCC) (2013) | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An iterated refinement procedure for the Guruswami--Sudan list decoding
algorithm for Generalised Reed--Solomon codes based on Alekhnovich's module
minimisation is proposed. The method is parametrisable and allows variants of
the usual list decoding approach. In particular, finding the list of
\emph{closest} codewords within an intermediate radius can be performed with
improved average-case complexity while retaining the worst-case complexity.
| [
{
"created": "Sat, 26 Jan 2013 11:16:40 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Jan 2013 07:47:15 GMT",
"version": "v2"
}
] | 2013-05-31 | [
[
"Nielsen",
"Johan Sebastian Rosenkilde",
"",
"Technical University of Denmark"
],
[
"Zeh",
"Alexander",
"",
"INT - University of Ulm., INRIA Saclay - Ile de France"
]
] | An iterated refinement procedure for the Guruswami--Sudan list decoding algorithm for Generalised Reed--Solomon codes based on Alekhnovich's module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list of \emph{closest} codewords within an intermediate radius can be performed with improved average-case complexity while retaining the worst-case complexity. |
2210.13635 | Hannes Westermann | Hannes Westermann, Jaromir Savelka, Vern R. Walker, Kevin D. Ashley,
Karim Benyekhlef | Toward an Intelligent Tutoring System for Argument Mining in Legal Texts | Accepted for presentation at the 35th International Conference on
Legal Knowledge and Information Systems (JURIX 2022) and publication in the
Frontiers of Artificial Intelligence and Applications series of IOS Press | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an adaptive environment (CABINET) to support caselaw analysis
(identifying key argument elements) based on a novel cognitive computing
framework that carefully matches various machine learning (ML) capabilities to
the proficiency of a user. CABINET supports law students in their learning as
well as professionals in their work. The results of our experiments focused on
the feasibility of the proposed framework are promising. We show that the
system is capable of identifying a potential error in the analysis with a very
low false positive rate (2.0-3.5%), as well as of predicting the key argument
element type (e.g., an issue or a holding) with a reasonably high F1-score
(0.74).
| [
{
"created": "Mon, 24 Oct 2022 22:31:02 GMT",
"version": "v1"
}
] | 2022-10-26 | [
[
"Westermann",
"Hannes",
""
],
[
"Savelka",
"Jaromir",
""
],
[
"Walker",
"Vern R.",
""
],
[
"Ashley",
"Kevin D.",
""
],
[
"Benyekhlef",
"Karim",
""
]
] | We propose an adaptive environment (CABINET) to support caselaw analysis (identifying key argument elements) based on a novel cognitive computing framework that carefully matches various machine learning (ML) capabilities to the proficiency of a user. CABINET supports law students in their learning as well as professionals in their work. The results of our experiments focused on the feasibility of the proposed framework are promising. We show that the system is capable of identifying a potential error in the analysis with a very low false positive rate (2.0-3.5%), as well as of predicting the key argument element type (e.g., an issue or a holding) with a reasonably high F1-score (0.74). |
2407.11975 | Kevin Baron | Kevin William Baron | Comparing Visual Metaphors with Textual Code For Learning Basic Computer
Science Concepts in Virtual Reality | 41 pages, 9 figures | null | null | null | cs.HC cs.MM | http://creativecommons.org/licenses/by/4.0/ | This paper represents a pilot study examining learners who are new to
computer science (CS). Subjects are taught to program in one of two virtual
reality (VR) applications developed by the researcher that use interactable
objects representing programming concepts. The different versions are the basis
for two experimental groups. One version of the app uses textual code for the
interactable programming objects and the other version uses everyday objects as
visual metaphors for the CS concepts the programming objects represent. For the
two experimental groups, the study compares the results of self-efficacy
surveys and CS knowledge tests taken before and after the VR activity
intervention. An attitudinal survey taken after the intervention examines
learners' sense of productivity and engagement with the VR activity. While
further iterations of the study with a larger sample size would be needed to
confirm any results, preliminary findings from the pilot study suggest that
both methods of teaching basic programming concepts in VR can lead to increased
levels of self-efficacy and knowledge regarding CS, and can contribute toward
productive mental states.
| [
{
"created": "Sat, 25 May 2024 07:46:43 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Baron",
"Kevin William",
""
]
] | This paper represents a pilot study examining learners who are new to computer science (CS). Subjects are taught to program in one of two virtual reality (VR) applications developed by the researcher that use interactable objects representing programming concepts. The different versions are the basis for two experimental groups. One version of the app uses textual code for the interactable programming objects and the other version uses everyday objects as visual metaphors for the CS concepts the programming objects represent. For the two experimental groups, the study compares the results of self-efficacy surveys and CS knowledge tests taken before and after the VR activity intervention. An attitudinal survey taken after the intervention examines learners' sense of productivity and engagement with the VR activity. While further iterations of the study with a larger sample size would be needed to confirm any results, preliminary findings from the pilot study suggest that both methods of teaching basic programming concepts in VR can lead to increased levels of self-efficacy and knowledge regarding CS, and can contribute toward productive mental states. |
2310.06794 | Siddhant Agarwal | Siddhant Agarwal, Ishan Durugkar, Peter Stone, Amy Zhang | $f$-Policy Gradients: A General Framework for Goal Conditioned RL using
$f$-Divergences | Accepted at NeurIPS 2023 | null | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Goal-Conditioned Reinforcement Learning (RL) problems often have access to
sparse rewards where the agent receives a reward signal only when it has
achieved the goal, making policy optimization a difficult problem. Several
works augment this sparse reward with a learned dense reward function, but this
can lead to sub-optimal policies if the reward is misaligned. Moreover, recent
works have demonstrated that effective shaping rewards for a particular problem
can depend on the underlying learning algorithm. This paper introduces a novel
way to encourage exploration called $f$-Policy Gradients, or $f$-PG. $f$-PG
minimizes the f-divergence between the agent's state visitation distribution
and the goal, which we show can lead to an optimal policy. We derive gradients
for various f-divergences to optimize this objective. Our learning paradigm
provides dense learning signals for exploration in sparse reward settings. We
further introduce an entropy-regularized policy optimization objective, that we
call $state$-MaxEnt RL (or $s$-MaxEnt RL) as a special case of our objective.
We show that several metric-based shaping rewards like L2 can be used with
$s$-MaxEnt RL, providing a common ground to study such metric-based shaping
rewards with efficient exploration. We find that $f$-PG has better performance
compared to standard policy gradient methods on a challenging gridworld as well
as the Point Maze and FetchReach environments. More information is available on
our website: https://agarwalsiddhant10.github.io/projects/fpg.html.
| [
{
"created": "Tue, 10 Oct 2023 17:07:05 GMT",
"version": "v1"
}
] | 2023-10-11 | [
[
"Agarwal",
"Siddhant",
""
],
[
"Durugkar",
"Ishan",
""
],
[
"Stone",
"Peter",
""
],
[
"Zhang",
"Amy",
""
]
] | Goal-Conditioned Reinforcement Learning (RL) problems often have access to sparse rewards where the agent receives a reward signal only when it has achieved the goal, making policy optimization a difficult problem. Several works augment this sparse reward with a learned dense reward function, but this can lead to sub-optimal policies if the reward is misaligned. Moreover, recent works have demonstrated that effective shaping rewards for a particular problem can depend on the underlying learning algorithm. This paper introduces a novel way to encourage exploration called $f$-Policy Gradients, or $f$-PG. $f$-PG minimizes the f-divergence between the agent's state visitation distribution and the goal, which we show can lead to an optimal policy. We derive gradients for various f-divergences to optimize this objective. Our learning paradigm provides dense learning signals for exploration in sparse reward settings. We further introduce an entropy-regularized policy optimization objective, that we call $state$-MaxEnt RL (or $s$-MaxEnt RL) as a special case of our objective. We show that several metric-based shaping rewards like L2 can be used with $s$-MaxEnt RL, providing a common ground to study such metric-based shaping rewards with efficient exploration. We find that $f$-PG has better performance compared to standard policy gradient methods on a challenging gridworld as well as the Point Maze and FetchReach environments. More information is available on our website: https://agarwalsiddhant10.github.io/projects/fpg.html. |
1802.10567 | Jost Tobias Springenberg | Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas
Degrave, Tom Van de Wiele, Volodymyr Mnih, Nicolas Heess, Jost Tobias
Springenberg | Learning by Playing - Solving Sparse Reward Tasks from Scratch | A video of the rich set of learned behaviours can be found at
https://youtu.be/mPKyvocNe_M | null | null | null | cs.LG cs.RO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in
the context of Reinforcement Learning (RL). SAC-X enables learning of complex
behaviors - from scratch - in the presence of multiple sparse reward signals.
To this end, the agent is equipped with a set of general auxiliary tasks that
it attempts to learn simultaneously via off-policy RL. The key idea behind our
method is that active (learned) scheduling and execution of auxiliary policies
allows the agent to efficiently explore its environment - enabling it to excel
at sparse reward RL. Our experiments in several challenging robotic
manipulation settings demonstrate the power of our approach.
| [
{
"created": "Wed, 28 Feb 2018 18:15:49 GMT",
"version": "v1"
}
] | 2018-03-01 | [
[
"Riedmiller",
"Martin",
""
],
[
"Hafner",
"Roland",
""
],
[
"Lampe",
"Thomas",
""
],
[
"Neunert",
"Michael",
""
],
[
"Degrave",
"Jonas",
""
],
[
"Van de Wiele",
"Tom",
""
],
[
"Mnih",
"Volodymyr",
""
],
[
"Heess",
"Nicolas",
""
],
[
"Springenberg",
"Jost Tobias",
""
]
] | We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach. |
2009.10272 | Shivam Handa | Shivam Handa, Martin Rinard | Inductive Program Synthesis Over Noisy Data | null | null | 10.1145/3368089.3409732 | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new framework and associated synthesis algorithms for program
synthesis over noisy data, i.e., data that may contain incorrect/corrupted
input-output examples. This framework is based on an extension of finite tree
automata called {\em weighted finite tree automata}. We show how to apply this
framework to formulate and solve a variety of program synthesis problems over
noisy data. Results from our implemented system running on problems from the
SyGuS 2018 benchmark suite highlight its ability to successfully synthesize
programs in the face of noisy data sets, including the ability to synthesize a
correct program even when every input-output example in the data set is
corrupted.
| [
{
"created": "Tue, 22 Sep 2020 01:57:48 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Oct 2020 20:23:13 GMT",
"version": "v2"
}
] | 2021-03-15 | [
[
"Handa",
"Shivam",
""
],
[
"Rinard",
"Martin",
""
]
] | We present a new framework and associated synthesis algorithms for program synthesis over noisy data, i.e., data that may contain incorrect/corrupted input-output examples. This framework is based on an extension of finite tree automata called {\em weighted finite tree automata}. We show how to apply this framework to formulate and solve a variety of program synthesis problems over noisy data. Results from our implemented system running on problems from the SyGuS 2018 benchmark suite highlight its ability to successfully synthesize programs in the face of noisy data sets, including the ability to synthesize a correct program even when every input-output example in the data set is corrupted. |
2402.03173 | Zichen Zhu | Zichen Zhu, Yang Xu, Lu Chen, Jingkai Yang, Yichuan Ma, Yiming Sun,
Hailin Wen, Jiaqi Liu, Jinyu Cai, Yingzi Ma, Situo Zhang, Zihan Zhao,
Liangtai Sun, Kai Yu | MULTI: Multimodal Understanding Leaderboard with Text and Images | 16 pages, 9 figures, 10 tables. Details and access are available at:
https://OpenDFM.github.io/MULTI-Benchmark/ | null | null | null | cs.CL cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Rapid progress in multimodal large language models (MLLMs) highlights the
need to introduce challenging yet realistic benchmarks to the academic
community, while existing benchmarks primarily focus on understanding simple
natural images and short context. In this paper, we present MULTI as a
cutting-edge benchmark for evaluating MLLMs on understanding complex tables and
images, and reasoning with long context. MULTI provides multimodal inputs and
requires responses that are either precise or open-ended, reflecting real-life
examination styles. MULTI includes over 18,000 questions and challenges MLLMs
with a variety of tasks, ranging from formula derivation to image detail
analysis and cross-modality reasoning. We also introduce MULTI-Elite, a
500-question selected hard subset, and MULTI-Extend, with more than 4,500
external knowledge context pieces. Our evaluation indicates significant
potential for MLLM advancement, with GPT-4V achieving a 63.7% accuracy rate on
MULTI, in contrast to other MLLMs scoring between 28.5% and 55.3%. MULTI serves
not only as a robust evaluation platform but also paves the way for the
development of expert-level AI.
| [
{
"created": "Mon, 5 Feb 2024 16:41:02 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2024 07:55:52 GMT",
"version": "v2"
}
] | 2024-02-21 | [
[
"Zhu",
"Zichen",
""
],
[
"Xu",
"Yang",
""
],
[
"Chen",
"Lu",
""
],
[
"Yang",
"Jingkai",
""
],
[
"Ma",
"Yichuan",
""
],
[
"Sun",
"Yiming",
""
],
[
"Wen",
"Hailin",
""
],
[
"Liu",
"Jiaqi",
""
],
[
"Cai",
"Jinyu",
""
],
[
"Ma",
"Yingzi",
""
],
[
"Zhang",
"Situo",
""
],
[
"Zhao",
"Zihan",
""
],
[
"Sun",
"Liangtai",
""
],
[
"Yu",
"Kai",
""
]
] | Rapid progress in multimodal large language models (MLLMs) highlights the need to introduce challenging yet realistic benchmarks to the academic community, while existing benchmarks primarily focus on understanding simple natural images and short context. In this paper, we present MULTI as a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images, and reasoning with long context. MULTI provides multimodal inputs and requires responses that are either precise or open-ended, reflecting real-life examination styles. MULTI includes over 18,000 questions and challenges MLLMs with a variety of tasks, ranging from formula derivation to image detail analysis and cross-modality reasoning. We also introduce MULTI-Elite, a 500-question selected hard subset, and MULTI-Extend, with more than 4,500 external knowledge context pieces. Our evaluation indicates significant potential for MLLM advancement, with GPT-4V achieving a 63.7% accuracy rate on MULTI, in contrast to other MLLMs scoring between 28.5% and 55.3%. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI. |
2403.17090 | Vida Dujmovic | Vida Dujmovic and Pat Morin | Free Sets in Planar Graphs: History and Applications | 31 pages | null | null | null | cs.CG cs.DM math.CO | http://creativecommons.org/licenses/by/4.0/ | A subset $S$ of vertices in a planar graph $G$ is a free set if, for every
set $P$ of $|S|$ points in the plane, there exists a straight-line
crossing-free drawing of $G$ in which vertices of $S$ are mapped to distinct
points in $P$. In this survey, we review several equivalent definitions of free
sets, results on the existence of large free sets in planar graphs and
subclasses of planar graphs, and applications of free sets in graph drawing.
The survey concludes with a list of open problems in this still very active
research area.
| [
{
"created": "Mon, 25 Mar 2024 18:25:15 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Dujmovic",
"Vida",
""
],
[
"Morin",
"Pat",
""
]
] | A subset $S$ of vertices in a planar graph $G$ is a free set if, for every set $P$ of $|S|$ points in the plane, there exists a straight-line crossing-free drawing of $G$ in which vertices of $S$ are mapped to distinct points in $P$. In this survey, we review several equivalent definitions of free sets, results on the existence of large free sets in planar graphs and subclasses of planar graphs, and applications of free sets in graph drawing. The survey concludes with a list of open problems in this still very active research area. |
2205.05793 | Hjalmar Wijk | Hjalmar Wijk, Benjie Wang, Marta Kwiatkowska | Robustness Guarantees for Credal Bayesian Networks via Constraint
Relaxation over Probabilistic Circuits | 11 pages (8+3 Appendix). To be published in IJCAI 2022 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In many domains, worst-case guarantees on the performance (e.g., prediction
accuracy) of a decision function subject to distributional shifts and
uncertainty about the environment are crucial. In this work we develop a method
to quantify the robustness of decision functions with respect to credal
Bayesian networks, formal parametric models of the environment where
uncertainty is expressed through credal sets on the parameters. In particular,
we address the maximum marginal probability (MARmax) problem, that is,
determining the greatest probability of an event (such as misclassification)
obtainable for parameters in the credal set. We develop a method to faithfully
transfer the problem into a constrained optimization problem on a probabilistic
circuit. By performing a simple constraint relaxation, we show how to obtain a
guaranteed upper bound on MARmax in linear time in the size of the circuit. We
further theoretically characterize this constraint relaxation in terms of the
original Bayesian network structure, which yields insight into the tightness of
the bound. We implement the method and provide experimental evidence that the
upper bound is often near tight and demonstrates improved scalability compared
to other methods.
| [
{
"created": "Wed, 11 May 2022 22:37:07 GMT",
"version": "v1"
}
] | 2022-05-13 | [
[
"Wijk",
"Hjalmar",
""
],
[
"Wang",
"Benjie",
""
],
[
"Kwiatkowska",
"Marta",
""
]
] | In many domains, worst-case guarantees on the performance (e.g., prediction accuracy) of a decision function subject to distributional shifts and uncertainty about the environment are crucial. In this work we develop a method to quantify the robustness of decision functions with respect to credal Bayesian networks, formal parametric models of the environment where uncertainty is expressed through credal sets on the parameters. In particular, we address the maximum marginal probability (MARmax) problem, that is, determining the greatest probability of an event (such as misclassification) obtainable for parameters in the credal set. We develop a method to faithfully transfer the problem into a constrained optimization problem on a probabilistic circuit. By performing a simple constraint relaxation, we show how to obtain a guaranteed upper bound on MARmax in linear time in the size of the circuit. We further theoretically characterize this constraint relaxation in terms of the original Bayesian network structure, which yields insight into the tightness of the bound. We implement the method and provide experimental evidence that the upper bound is often near tight and demonstrates improved scalability compared to other methods. |
2311.03742 | Xinhao Xiang | Xinhao Xiang, Simon Dr\"ager, Jiawei Zhang | 3DifFusionDet: Diffusion Model for 3D Object Detection with Robust
LiDAR-Camera Fusion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Good 3D object detection performance from LiDAR-Camera sensors demands
seamless feature alignment and fusion strategies. We propose the 3DifFusionDet
framework in this paper, which structures 3D object detection as a denoising
diffusion process from noisy 3D boxes to target boxes. In this framework,
ground truth boxes diffuse in a random distribution for training, and the model
learns to reverse the noising process. During inference, the model gradually
refines a set of boxes that were generated at random to the outcomes. Under the
feature alignment strategy, the progressive refinement method could make a
significant contribution to robust LiDAR-Camera fusion. The iterative
refinement process could also demonstrate great adaptability by applying the
framework to various detecting circumstances where varying levels of accuracy
and speed are required. Extensive experiments on KITTI, a benchmark for
real-world traffic object identification, revealed that 3DifFusionDet is able
to perform favorably in comparison to earlier, well-respected detectors.
| [
{
"created": "Tue, 7 Nov 2023 05:53:09 GMT",
"version": "v1"
}
] | 2023-11-08 | [
[
"Xiang",
"Xinhao",
""
],
[
"Dräger",
"Simon",
""
],
[
"Zhang",
"Jiawei",
""
]
] | Good 3D object detection performance from LiDAR-Camera sensors demands seamless feature alignment and fusion strategies. We propose the 3DifFusionDet framework in this paper, which structures 3D object detection as a denoising diffusion process from noisy 3D boxes to target boxes. In this framework, ground truth boxes diffuse in a random distribution for training, and the model learns to reverse the noising process. During inference, the model gradually refines a set of boxes that were generated at random to the outcomes. Under the feature alignment strategy, the progressive refinement method could make a significant contribution to robust LiDAR-Camera fusion. The iterative refinement process could also demonstrate great adaptability by applying the framework to various detecting circumstances where varying levels of accuracy and speed are required. Extensive experiments on KITTI, a benchmark for real-world traffic object identification, revealed that 3DifFusionDet is able to perform favorably in comparison to earlier, well-respected detectors. |
1008.3443 | Jose Ignacio Alvarez-Hamelin | Jos\'e Ignacio Alvarez-Hamelin (FIUBA, INTECIN), Beir\'o Mariano
Gast\'on (FIUBA), Jorge Rodolfo Busch (FIUBA) | On weakly optimal partitions in modular networks | null | null | null | null | cs.SI cond-mat.stat-mech physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modularity was introduced as a measure of goodness for the community
structure induced by a partition of the set of vertices in a graph. Then, it
also became an objective function used to find good partitions, with high
success. Nevertheless, some works have shown a scaling limit and certain
instabilities when finding communities with this criterion. Modularity has been
studied through several formalisms, such as Hamiltonians in a Potts model or
Laplacians in spectral partitioning. In this paper we present a new
probabilistic formalism to analyze modularity, and from it we derive an
algorithm based on weakly optimal partitions. This algorithm obtains good
quality partitions and also scales to large graphs.
| [
{
"created": "Fri, 20 Aug 2010 06:49:04 GMT",
"version": "v1"
}
] | 2010-08-25 | [
[
"Alvarez-Hamelin",
"José Ignacio",
"",
"FIUBA, INTECIN"
],
[
"Gastón",
"Beiró Mariano",
"",
"FIUBA"
],
[
"Busch",
"Jorge Rodolfo",
"",
"FIUBA"
]
] | Modularity was introduced as a measure of goodness for the community structure induced by a partition of the set of vertices in a graph. Then, it also became an objective function used to find good partitions, with high success. Nevertheless, some works have shown a scaling limit and certain instabilities when finding communities with this criterion. Modularity has been studied through several formalisms, such as Hamiltonians in a Potts model or Laplacians in spectral partitioning. In this paper we present a new probabilistic formalism to analyze modularity, and from it we derive an algorithm based on weakly optimal partitions. This algorithm obtains good quality partitions and also scales to large graphs. |
2208.11235 | Colin Gordon | Sergey Matskevich, Colin S. Gordon | Preprocessing Source Code Comments for Linguistic Models | Correcting author name | null | null | null | cs.SE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comments are an important part of the source code and are a primary source of
documentation. This has driven interest in using large bodies of comments to
train or evaluate tools that consume or produce them -- such as generating
oracles or even code from comments, or automatically generating code summaries.
Most of this work makes strong assumptions about the structure and quality of
comments, such as assuming they consist mostly of proper English sentences.
However, we know little about the actual quality of existing comments for these
use cases. Comments often contain unique structures and elements that are not
seen in other types of text, and filtering or extracting information from them
requires some extra care. This paper explores the contents and quality of
Python comments drawn from the 840 most popular open source projects on GitHub
and 8422 projects from the SriLab dataset, and the impact that na\"ive vs.
in-depth filtering can have on the use of existing comments for training and evaluation
of systems that generate comments.
| [
{
"created": "Tue, 23 Aug 2022 23:44:09 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Aug 2022 23:46:49 GMT",
"version": "v2"
}
] | 2022-08-30 | [
[
"Matskevich",
"Sergey",
""
],
[
"Gordon",
"Colin S.",
""
]
] | Comments are an important part of the source code and are a primary source of documentation. This has driven interest in using large bodies of comments to train or evaluate tools that consume or produce them -- such as generating oracles or even code from comments, or automatically generating code summaries. Most of this work makes strong assumptions about the structure and quality of comments, such as assuming they consist mostly of proper English sentences. However, we know little about the actual quality of existing comments for these use cases. Comments often contain unique structures and elements that are not seen in other types of text, and filtering or extracting information from them requires some extra care. This paper explores the contents and quality of Python comments drawn from the 840 most popular open source projects on GitHub and 8422 projects from the SriLab dataset, and the impact that na\"ive vs. in-depth filtering can have on the use of existing comments for training and evaluation of systems that generate comments. |
2404.16051 | Jan Martijn van der Werf | Max Lonysa Muller, Erik Saaman, Jan Martijn E. M. van der Werf,
Charles Jeurgens and Hajo A. Reijers | TimeFlows: Visualizing Process Chronologies from Vast Collections of
Heterogeneous Information Objects | 16 pages, accepted at RCIS 2024 | null | null | null | cs.HC cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In many fact-finding investigations, notably parliamentary inquiries, process
chronologies are created to reconstruct how a controversial policy or decision
came into existence. Current approaches, like timelines, lack the
expressiveness to represent the variety of relations in which historic events
may link to the overall chronology. This obfuscates the nature of the
interdependence among the events, and the texts from which they are distilled.
Based on explorative interviews with expert analysts, we propose an extended,
rich set of relationships. We describe how these can be visualized as
TimeFlows. We provide an example of such a visualization by illustrating the
Childcare Benefits Scandal -- an affair that deeply affected Dutch politics in
recent years. This work extends the scope of existing process discovery
research into the direction of unveiling non-repetitive processes from
unstructured information objects.
| [
{
"created": "Wed, 10 Apr 2024 11:08:26 GMT",
"version": "v1"
},
{
"created": "Thu, 2 May 2024 19:11:49 GMT",
"version": "v2"
}
] | 2024-05-06 | [
[
"Muller",
"Max Lonysa",
""
],
[
"Saaman",
"Erik",
""
],
[
"van der Werf",
"Jan Martijn E. M.",
""
],
[
"Jeurgens",
"Charles",
""
],
[
"Reijers",
"Hajo A.",
""
]
] | In many fact-finding investigations, notably parliamentary inquiries, process chronologies are created to reconstruct how a controversial policy or decision came into existence. Current approaches, like timelines, lack the expressiveness to represent the variety of relations in which historic events may link to the overall chronology. This obfuscates the nature of the interdependence among the events, and the texts from which they are distilled. Based on explorative interviews with expert analysts, we propose an extended, rich set of relationships. We describe how these can be visualized as TimeFlows. We provide an example of such a visualization by illustrating the Childcare Benefits Scandal -- an affair that deeply affected Dutch politics in recent years. This work extends the scope of existing process discovery research into the direction of unveiling non-repetitive processes from unstructured information objects. |
1711.05354 | William Leeb | William Leeb and Vladimir Rokhlin | On the Numerical Solution of Fourth-Order Linear Two-Point Boundary
Value Problems | null | null | null | null | cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a fast and numerically stable algorithm for the
solution of fourth-order linear boundary value problems on an interval. This
type of equation arises in a variety of settings in physics and signal
processing. Our method reformulates the equation as a collection of second-kind
integral equations defined on local subdomains. Each such equation can be
stably discretized and solved. The boundary values of these local solutions are
matched by solving a banded linear system. The method of deferred corrections
is then used to increase the accuracy of the scheme. Deferred corrections
requires applying the integral operator to a function on the entire domain, for
which we provide an algorithm with linear cost. We illustrate the performance
of our method on several numerical examples.
| [
{
"created": "Tue, 14 Nov 2017 23:15:28 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Feb 2018 02:03:29 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Jan 2020 20:12:46 GMT",
"version": "v3"
}
] | 2020-01-13 | [
[
"Leeb",
"William",
""
],
[
"Rokhlin",
"Vladimir",
""
]
] | This paper introduces a fast and numerically stable algorithm for the solution of fourth-order linear boundary value problems on an interval. This type of equation arises in a variety of settings in physics and signal processing. Our method reformulates the equation as a collection of second-kind integral equations defined on local subdomains. Each such equation can be stably discretized and solved. The boundary values of these local solutions are matched by solving a banded linear system. The method of deferred corrections is then used to increase the accuracy of the scheme. Deferred corrections requires applying the integral operator to a function on the entire domain, for which we provide an algorithm with linear cost. We illustrate the performance of our method on several numerical examples. |
1902.10460 | Jiasong Wu | Jinpeng Xia, Jiasong Wu, Youyong Kong, Pinzheng Zhang, Lotfi Senhadji,
Huazhong Shu | Modulated binary cliquenet | 5 pages, 3 figures, 2 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although Convolutional Neural Networks (CNNs) achieve effectiveness in
various computer vision tasks, the significant requirement of storage of such
networks hinders the deployment on computationally limited devices. In this
paper, we propose a new compact and portable deep learning network named
Modulated Binary Cliquenet (MBCliqueNet) aiming to improve the portability of
CNNs based on binarized filters while achieving comparable performance with the
full-precision CNNs like Resnet. In MBCliqueNet, we introduce a novel modulated
operation to approximate the unbinarized filters and give an initialization
method to speed up its convergence. We reduce the extra parameters caused by
modulated operation with parameters sharing. As a result, the proposed
MBCliqueNet can reduce the required storage space of convolutional filters by a
factor of at least 32, in contrast to the full-precision model, and achieve
better performance than other state-of-the-art binarized models. More
importantly, our model compares even better with some full-precision models
like Resnet on the dataset we used.
| [
{
"created": "Wed, 27 Feb 2019 11:14:01 GMT",
"version": "v1"
}
] | 2019-02-28 | [
[
"Xia",
"Jinpeng",
""
],
[
"Wu",
"Jiasong",
""
],
[
"Kong",
"Youyong",
""
],
[
"Zhang",
"Pinzheng",
""
],
[
"Senhadji",
"Lotfi",
""
],
[
"Shu",
"Huazhong",
""
]
] | Although Convolutional Neural Networks (CNNs) achieve effectiveness in various computer vision tasks, the significant requirement of storage of such networks hinders the deployment on computationally limited devices. In this paper, we propose a new compact and portable deep learning network named Modulated Binary Cliquenet (MBCliqueNet) aiming to improve the portability of CNNs based on binarized filters while achieving comparable performance with the full-precision CNNs like Resnet. In MBCliqueNet, we introduce a novel modulated operation to approximate the unbinarized filters and give an initialization method to speed up its convergence. We reduce the extra parameters caused by modulated operation with parameters sharing. As a result, the proposed MBCliqueNet can reduce the required storage space of convolutional filters by a factor of at least 32, in contrast to the full-precision model, and achieve better performance than other state-of-the-art binarized models. More importantly, our model compares even better with some full-precision models like Resnet on the dataset we used. |
2312.16652 | Omar Al-Bataineh | Omar I. Al-Bataineh | Invariant-based Program Repair | Accepted for publication in the 27th International Conference on
Fundamental Approaches to Software Engineering (FASE 2024) | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | This paper describes a formal general-purpose automated program repair (APR)
framework based on the concept of program invariants. In the presented repair
framework, the execution traces of a defected program are dynamically analyzed
to infer specifications $\varphi_{correct}$ and $\varphi_{violated}$, where
$\varphi_{correct}$ represents the set of likely invariants (good patterns)
required for a run to be successful and $\varphi_{violated}$ represents the set
of likely suspicious invariants (bad patterns) that result in the bug in the
defected program. These specifications are then refined using rigorous program
analysis techniques, which are also used to drive the repair process towards
feasible patches and assess the correctness of generated patches. We demonstrate
the usefulness of leveraging invariants in APR by developing an invariant-based
repair system for performance bugs. The initial analysis shows the
effectiveness of invariant-based APR in handling performance bugs by producing
patches that improve the program's efficiency without adversely impacting
its functionality.
| [
{
"created": "Wed, 27 Dec 2023 17:46:19 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jan 2024 20:20:20 GMT",
"version": "v2"
}
] | 2024-01-30 | [
[
"Al-Bataineh",
"Omar I.",
""
]
] | This paper describes a formal general-purpose automated program repair (APR) framework based on the concept of program invariants. In the presented repair framework, the execution traces of a defected program are dynamically analyzed to infer specifications $\varphi_{correct}$ and $\varphi_{violated}$, where $\varphi_{correct}$ represents the set of likely invariants (good patterns) required for a run to be successful and $\varphi_{violated}$ represents the set of likely suspicious invariants (bad patterns) that result in the bug in the defected program. These specifications are then refined using rigorous program analysis techniques, which are also used to drive the repair process towards feasible patches and assess the correctness of generated patches. We demonstrate the usefulness of leveraging invariants in APR by developing an invariant-based repair system for performance bugs. The initial analysis shows the effectiveness of invariant-based APR in handling performance bugs by producing patches that improve the program's efficiency without adversely impacting its functionality. |
1106.5988 | Lazaros Gkatzikis | Lazaros Gkatzikis, Georgios S. Paschos and Iordanis Koutsopoulos | The impact of energy constraints on the medium access | 8 pages, 3 figures | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contemporary mobile devices are battery powered and due to their shrinking
size and increasing complexity operate on a tight energy budget. Thus, energy
consumption is becoming one of the major concerns regarding the current and
upcoming wireless communication systems. On the other hand, the available
bandwidth resources are limited and modern applications are throughput
demanding, leading thus to strong competition for the medium. In this
direction, we consider a stochastic contention based medium access scheme,
where the devices may choose to turn off for some time in order to save energy.
We perform an analysis for a slotted ALOHA scenario and we show that the energy
constraints, if properly exploited, may reduce contention for the medium. Our
results give valuable insights on the energy--throughput tradeoff for any
contention based system.
| [
{
"created": "Sun, 5 Jun 2011 21:52:15 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Gkatzikis",
"Lazaros",
""
],
[
"Paschos",
"Georgios S.",
""
],
[
"Koutsopoulos",
"Iordanis",
""
]
] | Contemporary mobile devices are battery powered and, due to their shrinking size and increasing complexity, operate on a tight energy budget. Thus, energy consumption is becoming one of the major concerns in current and upcoming wireless communication systems. On the other hand, the available bandwidth resources are limited and modern applications are throughput-demanding, thus leading to strong competition for the medium. In this direction, we consider a stochastic contention-based medium access scheme, where the devices may choose to turn off for some time in order to save energy. We perform an analysis for a slotted ALOHA scenario and show that the energy constraints, if properly exploited, may reduce contention for the medium. Our results give valuable insights into the energy--throughput tradeoff for any contention-based system. |
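The effect described in the abstract above can be illustrated with the standard slotted ALOHA success probability. This is a hedged sketch with illustrative parameters (not taken from the paper): when nodes sleep part of the time, the effective access probability is thinned, which can raise throughput under heavy contention.

```python
def aloha_throughput(n, p_tx):
    # P(exactly one of n nodes transmits in a given slot)
    return n * p_tx * (1 - p_tx) ** (n - 1)

n, p = 10, 0.5
always_on = aloha_throughput(n, p)          # every node contends in every slot
with_sleep = aloha_throughput(n, 0.2 * p)   # nodes awake in only 20% of slots

print(f"always on:  {always_on:.4f}")
print(f"with sleep: {with_sleep:.4f}")
assert with_sleep > always_on   # the energy constraint reduces contention
```

With ten aggressive nodes the always-on success probability is below 1%, while the sleeping regime approaches the ALOHA optimum, which is the energy-throughput interplay the abstract points to.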
2012.14873 | Sebastian Johann Wetzel | Sebastian J. Wetzel, Kevin Ryczko, Roger G. Melko, Isaac Tamblyn | Twin Neural Network Regression | null | null | 10.1002/ail2.78 | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce twin neural network (TNN) regression. This method predicts
differences between the target values of two different data points rather than
the targets themselves. The solution of a traditional regression problem is
then obtained by averaging over an ensemble of all predicted differences
between the targets of an unseen data point and all training data points.
Whereas ensembles are normally costly to produce, TNN regression intrinsically
creates an ensemble of predictions of twice the size of the training set while
only training a single neural network. Since ensembles have been shown to be
more accurate than single models, this property naturally transfers to TNN
regression. We show that TNNs are able to compete with, or yield more accurate
predictions than, other state-of-the-art methods on different data sets.
Furthermore, TNN regression is constrained by self-consistency
conditions. We find that the violation of these conditions provides an estimate
for the prediction uncertainty.
| [
{
"created": "Tue, 29 Dec 2020 17:52:31 GMT",
"version": "v1"
}
] | 2022-12-14 | [
[
"Wetzel",
"Sebastian J.",
""
],
[
"Ryczko",
"Kevin",
""
],
[
"Melko",
"Roger G.",
""
],
[
"Tamblyn",
"Isaac",
""
]
] | We introduce twin neural network (TNN) regression. This method predicts differences between the target values of two different data points rather than the targets themselves. The solution of a traditional regression problem is then obtained by averaging over an ensemble of all predicted differences between the targets of an unseen data point and all training data points. Whereas ensembles are normally costly to produce, TNN regression intrinsically creates an ensemble of predictions of twice the size of the training set while only training a single neural network. Since ensembles have been shown to be more accurate than single models, this property naturally transfers to TNN regression. We show that TNNs are able to compete with, or yield more accurate predictions than, other state-of-the-art methods on different data sets. Furthermore, TNN regression is constrained by self-consistency conditions. We find that the violation of these conditions provides an estimate for the prediction uncertainty. |
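The difference-then-average scheme described above can be sketched in a few lines of numpy. For clarity, a linear least-squares model on feature differences stands in for the twin neural network; the data and shapes are synthetic assumptions:

```python
# TNN-regression idea: learn f(x_i, x_j) ~ y_i - y_j on pairs, then
# answer a query x by averaging y_j + f(x, x_j) over all anchors j.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=50)

# Build all ordered pairs and fit the difference model on x_i - x_j.
i, j = np.meshgrid(np.arange(50), np.arange(50), indexing="ij")
dX = X[i.ravel()] - X[j.ravel()]
dy = y[i.ravel()] - y[j.ravel()]
w, *_ = np.linalg.lstsq(dX, dy, rcond=None)

def tnn_predict(x):
    # ensemble of len(X) difference-based estimates, one per training anchor
    return np.mean(y + (x - X) @ w)

x_new = np.array([0.3, -0.1, 1.2])
print(tnn_predict(x_new), x_new @ w_true)  # the two should nearly agree
```

Each anchor contributes an independent estimate `y_j + f(x, x_j)`, so the averaging yields the intrinsic ensemble the abstract describes without training extra models.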
2006.09319 | Mamikon Gulian | Laura Swiler, Mamikon Gulian, Ari Frankel, Cosmin Safta, John Jakeman | A Survey of Constrained Gaussian Process Regression: Approaches and
Implementation Challenges | 42 pages, 3 figures. Version 3: DOI & Reference added; appeared in
Journal of Machine Learning for Modeling and Computing. Version 2 includes
minor additions, clarifications and improvements to notation | Journal of Machine Learning for Modeling and Computing,
1(2):119-156 (2020) | 10.1615/JMachLearnModelComput.2020035155 | null | cs.LG math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian process regression is a popular Bayesian framework for surrogate
modeling of expensive data sources. As part of a broader effort in scientific
machine learning, many recent works have incorporated physical constraints or
other a priori information within Gaussian process regression to supplement
limited data and regularize the behavior of the model. We provide an overview
and survey of several classes of Gaussian process constraints, including
positivity or bound constraints, monotonicity and convexity constraints,
differential equation constraints provided by linear PDEs, and boundary
condition constraints. We compare the strategies behind each approach as well
as the differences in implementation, concluding with a discussion of the
computational challenges introduced by constraints.
| [
{
"created": "Tue, 16 Jun 2020 17:03:36 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Dec 2020 18:55:38 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Jan 2021 17:45:06 GMT",
"version": "v3"
}
] | 2021-01-07 | [
[
"Swiler",
"Laura",
""
],
[
"Gulian",
"Mamikon",
""
],
[
"Frankel",
"Ari",
""
],
[
"Safta",
"Cosmin",
""
],
[
"Jakeman",
"John",
""
]
] | Gaussian process regression is a popular Bayesian framework for surrogate modeling of expensive data sources. As part of a broader effort in scientific machine learning, many recent works have incorporated physical constraints or other a priori information within Gaussian process regression to supplement limited data and regularize the behavior of the model. We provide an overview and survey of several classes of Gaussian process constraints, including positivity or bound constraints, monotonicity and convexity constraints, differential equation constraints provided by linear PDEs, and boundary condition constraints. We compare the strategies behind each approach as well as the differences in implementation, concluding with a discussion of the computational challenges introduced by constraints. |
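One of the simplest constraint classes surveyed above, positivity, can be sketched via a warping approach: place the GP on log(y) and exponentiate the posterior mean. The kernel, length-scale, and data below are illustrative assumptions, not from the survey:

```python
# Positivity-constrained GP regression by modeling z = log(y).
import numpy as np

def rbf(A, B, ell=0.5):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)

Xtr = np.linspace(0.1, 2.0, 10)
ytr = np.exp(-Xtr) + 0.3                 # strictly positive target
Xte = np.linspace(0.1, 2.0, 100)

z = np.log(ytr)                          # GP lives in log space
K = rbf(Xtr, Xtr) + 1e-6 * np.eye(10)    # jitter for numerical stability
mean_z = rbf(Xte, Xtr) @ np.linalg.solve(K, z)
y_pred = np.exp(mean_z)                  # always > 0 by construction

assert np.all(y_pred > 0)
print(float(np.max(np.abs(y_pred - (np.exp(-Xte) + 0.3)))))  # small error
```

Other constraint classes in the survey (monotonicity, PDE, boundary conditions) require modifying the kernel or the likelihood rather than warping the output, which is where the implementation challenges arise.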
2312.05459 | Venkata Raghava Kurada | Venkata Raghava Kurada, Pallava Kumar Baruah | FLoW3 -- Web3 Empowered Federated Learning | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Federated Learning is susceptible to various kinds of attacks, such as data poisoning, model poisoning, and man-in-the-middle attacks. We perceive Federated Learning as a hierarchical structure: a federation of nodes with validators at the head. Validation is performed through consensus, employing novelty detection and the Snowball protocol to identify valuable and relevant updates while filtering out potentially malicious or irrelevant ones, thus preventing model poisoning attacks. The validators' opinions are recorded on a blockchain and a trust score is calculated. When consensus is lacking, the trust score determines the impact of each validator on the global model. A hyperparameter is introduced to guide the model generation process, relying either on consensus or on the trust score. This approach ensures transparency and reliability in the aggregation process and allows the global model to benefit from the insights of the most trusted nodes. In the training phase, the combination of IPFS and PGP encryption provides (a) secure and decentralized storage, (b) mitigation of single points of failure, making the system reliable, and (c) resilience against man-in-the-middle attacks. The system is implemented in Python, with Foundry used for smart contract development. The global model is tested against data poisoning by flipping labels and by introducing malicious nodes; the results are similar to those of Flower.
| [
{
"created": "Sat, 9 Dec 2023 04:05:07 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Kurada",
"Venkata Raghava",
""
],
[
"Baruah",
"Pallava Kumar",
""
]
] | Federated Learning is susceptible to various kinds of attacks, such as data poisoning, model poisoning, and man-in-the-middle attacks. We perceive Federated Learning as a hierarchical structure: a federation of nodes with validators at the head. Validation is performed through consensus, employing novelty detection and the Snowball protocol to identify valuable and relevant updates while filtering out potentially malicious or irrelevant ones, thus preventing model poisoning attacks. The validators' opinions are recorded on a blockchain and a trust score is calculated. When consensus is lacking, the trust score determines the impact of each validator on the global model. A hyperparameter is introduced to guide the model generation process, relying either on consensus or on the trust score. This approach ensures transparency and reliability in the aggregation process and allows the global model to benefit from the insights of the most trusted nodes. In the training phase, the combination of IPFS and PGP encryption provides (a) secure and decentralized storage, (b) mitigation of single points of failure, making the system reliable, and (c) resilience against man-in-the-middle attacks. The system is implemented in Python, with Foundry used for smart contract development. The global model is tested against data poisoning by flipping labels and by introducing malicious nodes; the results are similar to those of Flower. |
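The two aggregation modes described above (consensus versus trust score) plus a novelty filter can be sketched as follows. This is an illustrative toy, not the authors' code; the thresholds, `rely_on_consensus` flag, and median-distance novelty check are assumptions standing in for the paper's Novelty Detection and Snowball components:

```python
import numpy as np

def aggregate(updates, votes, trust, rely_on_consensus=True):
    votes = np.asarray(votes, dtype=float)       # 1 = accept, 0 = reject
    trust = np.asarray(trust, dtype=float)
    if rely_on_consensus and abs(votes.mean() - 0.5) > 0.2:
        accept = votes.mean() > 0.5              # clear consensus reached
    else:
        # no clear consensus: let the trust-weighted opinion decide
        accept = (votes @ trust) / trust.sum() > 0.5
    kept = updates if accept else updates[:0]
    return kept.mean(axis=0) if len(kept) else None

updates = np.array([[0.1, 0.2], [0.12, 0.18], [5.0, -4.0]])  # last: poisoned
# toy novelty check: drop updates far from the coordinate-wise median
med = np.median(updates, axis=0)
benign = updates[np.linalg.norm(updates - med, axis=1) < 1.0]
print(aggregate(benign, votes=[1, 1, 0], trust=[0.9, 0.8, 0.1]))
```

The poisoned update is filtered before aggregation, and the validators' votes, lacking a clear consensus here, are resolved by trust score, mirroring the fallback the abstract describes.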
1911.05204 | Hsiao-Yu Chen | Hsiao-yu Chen, Paul Kry, Etienne Vouga | Locking-free Simulation of Isometric Thin Plates | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To efficiently simulate very thin, inextensible materials like cloth or
paper, it is tempting to replace force-based thin-plate dynamics with hard
isometry constraints. Unfortunately, naive formulations of the constraints
induce membrane locking---artificial stiffening of bending modes due to the
inability of discrete kinematics to reproduce exact isometries. We propose a
simple set of meshless isometry constraints, based on moving-least-squares
averaging of the strain tensor, which do not lock, and which can be easily
incorporated into standard constrained Lagrangian dynamics integration.
| [
{
"created": "Tue, 12 Nov 2019 23:35:59 GMT",
"version": "v1"
}
] | 2019-11-14 | [
[
"Chen",
"Hsiao-yu",
""
],
[
"Kry",
"Paul",
""
],
[
"Vouga",
"Etienne",
""
]
] | To efficiently simulate very thin, inextensible materials like cloth or paper, it is tempting to replace force-based thin-plate dynamics with hard isometry constraints. Unfortunately, naive formulations of the constraints induce membrane locking---artificial stiffening of bending modes due to the inability of discrete kinematics to reproduce exact isometries. We propose a simple set of meshless isometry constraints, based on moving-least-squares averaging of the strain tensor, which do not lock, and which can be easily incorporated into standard constrained Lagrangian dynamics integration. |
2404.06762 | Zhengyuan Liu | Zhengyuan Liu, Stella Xin Yin, Geyu Lin, Nancy F. Chen | Personality-aware Student Simulation for Conversational Intelligent
Tutoring Systems | null | null | null | null | cs.CL cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intelligent Tutoring Systems (ITSs) can provide a personalized and self-paced
learning experience. The emergence of large language models (LLMs) further
enables better human-machine interaction, and facilitates the development of
conversational ITSs in various disciplines such as math and language learning.
In dialogic teaching, recognizing and adapting to individual characteristics
can significantly enhance student engagement and learning efficiency. However,
characterizing and simulating students' personas remain challenging in training
and evaluating conversational ITSs. In this work, we propose a framework to
construct profiles of different student groups by refining and integrating both
cognitive and noncognitive aspects, and leverage LLMs for personality-aware
student simulation in a language learning scenario. We further enhance the
framework with multi-aspect validation, and conduct extensive analysis from
both teacher and student perspectives. Our experimental results show that
state-of-the-art LLMs can produce diverse student responses according to the
given language ability and personality traits, and trigger teachers' adaptive
scaffolding strategies.
| [
{
"created": "Wed, 10 Apr 2024 06:03:13 GMT",
"version": "v1"
}
] | 2024-04-11 | [
[
"Liu",
"Zhengyuan",
""
],
[
"Yin",
"Stella Xin",
""
],
[
"Lin",
"Geyu",
""
],
[
"Chen",
"Nancy F.",
""
]
] | Intelligent Tutoring Systems (ITSs) can provide a personalized and self-paced learning experience. The emergence of large language models (LLMs) further enables better human-machine interaction, and facilitates the development of conversational ITSs in various disciplines such as math and language learning. In dialogic teaching, recognizing and adapting to individual characteristics can significantly enhance student engagement and learning efficiency. However, characterizing and simulating students' personas remain challenging in training and evaluating conversational ITSs. In this work, we propose a framework to construct profiles of different student groups by refining and integrating both cognitive and noncognitive aspects, and leverage LLMs for personality-aware student simulation in a language learning scenario. We further enhance the framework with multi-aspect validation, and conduct extensive analysis from both teacher and student perspectives. Our experimental results show that state-of-the-art LLMs can produce diverse student responses according to the given language ability and personality traits, and trigger teachers' adaptive scaffolding strategies. |
2202.03918 | Michael Langberg | Michael Langberg and Michelle Effros | Network Coding Multicast Key-Capacity | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a multi-source multi-terminal noiseless network, the key-dissemination
problem involves the task of multicasting a secret key K from the network
sources to its terminals. As in secure multicast network-coding, in the
key-dissemination problem the source nodes have access to independent
randomness and, as the network is noiseless, the resulting key K is a function
of the sources' information. However, different from traditional forms of
multicast, in key-dissemination the key K need not consist of source messages,
but rather may be any function of the information generated at the sources, as
long as it is shared by all terminals. Allowing the shared key K to be a
mixture of source information grants a flexibility to the communication process
which gives rise to the potential of increased key-rates when compared to
traditional secure multicast. The multicast key-capacity is the supremum of
achievable key-rates, subject to the security requirement that the shared key
is not revealed to an eavesdropper with predefined eavesdropping capabilities.
The key-dissemination problem (also termed secret key-agreement) has been
studied extensively over the past decades in memoryless network structures. In
this work, we initiate the study of key-dissemination in the context of
noiseless networks, i.e., network coding. In this context, we study
similarities and differences between traditional secure-multicast and the more
lenient task of key-dissemination.
| [
{
"created": "Tue, 8 Feb 2022 15:11:01 GMT",
"version": "v1"
},
{
"created": "Thu, 19 May 2022 15:39:40 GMT",
"version": "v2"
}
] | 2022-05-20 | [
[
"Langberg",
"Michael",
""
],
[
"Effros",
"Michelle",
""
]
] | For a multi-source multi-terminal noiseless network, the key-dissemination problem involves the task of multicasting a secret key K from the network sources to its terminals. As in secure multicast network-coding, in the key-dissemination problem the source nodes have access to independent randomness and, as the network is noiseless, the resulting key K is a function of the sources' information. However, different from traditional forms of multicast, in key-dissemination the key K need not consist of source messages, but rather may be any function of the information generated at the sources, as long as it is shared by all terminals. Allowing the shared key K to be a mixture of source information grants a flexibility to the communication process which gives rise to the potential of increased key-rates when compared to traditional secure multicast. The multicast key-capacity is the supremum of achievable key-rates, subject to the security requirement that the shared key is not revealed to an eavesdropper with predefined eavesdropping capabilities. The key-dissemination problem (also termed secret key-agreement) has been studied extensively over the past decades in memoryless network structures. In this work, we initiate the study of key-dissemination in the context of noiseless networks, i.e., network coding. In this context, we study similarities and differences between traditional secure-multicast and the more lenient task of key-dissemination. |
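A toy numeric illustration (not from the paper) of the flexibility described above: the shared key K = s1 XOR s2 is a function of the sources' randomness but equals neither source message, and an eavesdropper who taps only the link carrying s1 learns nothing about K.

```python
import random

random.seed(0)
# two sources each generate an independent uniform bit per round
trials = [(random.getrandbits(1), random.getrandbits(1))
          for _ in range(100_000)]
keys = [s1 ^ s2 for s1, s2 in trials]

# Every terminal that receives both source bits computes the same K.
assert all((s1 ^ s2) == k for (s1, s2), k in zip(trials, keys))

# Conditioned on the tapped value s1 = 0, K is still (empirically) uniform:
p_k1_given_s1_0 = sum(k for (s1, _), k in zip(trials, keys) if s1 == 0) \
    / sum(1 for s1, _ in trials if s1 == 0)
print(round(p_k1_given_s1_0, 2))  # ~0.5: the eavesdropper's view is useless
```

A key of this mixed form is exactly what traditional secure multicast does not allow, which is the source of the potential key-rate gain the abstract mentions.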
1904.03522 | Roee Levy Leshem | Roee Levy Leshem, Raja Giryes | Taco-VC: A Single Speaker Tacotron based Voice Conversion with Limited
Data | Accepted to EUSIPCO 2020 | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces Taco-VC, a novel architecture for voice conversion
based on the Tacotron synthesizer, a sequence-to-sequence model with attention.
The training of multi-speaker voice conversion systems requires a large
number of resources, both in training and corpus size. Taco-VC is implemented
using a single speaker Tacotron synthesizer based on Phonetic PosteriorGrams
(PPGs) and a single speaker WaveNet vocoder conditioned on mel spectrograms. To
enhance the converted speech quality, and to overcome over-smoothing, the
outputs of Tacotron are passed through a novel speech enhancement network, which
is composed of a combination of the phoneme recognition and Tacotron networks.
Our system is trained just with a single speaker corpus and adapts to new
speakers using only a few minutes of training data. Using mid-size public
datasets, our method outperforms the baseline in the VCC 2018 SPOKE
non-parallel voice conversion task and achieves competitive results compared to
multi-speaker networks trained on large private datasets.
| [
{
"created": "Sat, 6 Apr 2019 20:19:07 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2019 09:39:02 GMT",
"version": "v2"
},
{
"created": "Sat, 26 Oct 2019 08:53:29 GMT",
"version": "v3"
},
{
"created": "Fri, 19 Jun 2020 07:18:11 GMT",
"version": "v4"
}
] | 2020-06-22 | [
[
"Leshem",
"Roee Levy",
""
],
[
"Giryes",
"Raja",
""
]
] | This paper introduces Taco-VC, a novel architecture for voice conversion based on the Tacotron synthesizer, a sequence-to-sequence model with attention. The training of multi-speaker voice conversion systems requires a large number of resources, both in training and corpus size. Taco-VC is implemented using a single speaker Tacotron synthesizer based on Phonetic PosteriorGrams (PPGs) and a single speaker WaveNet vocoder conditioned on mel spectrograms. To enhance the converted speech quality, and to overcome over-smoothing, the outputs of Tacotron are passed through a novel speech enhancement network, which is composed of a combination of the phoneme recognition and Tacotron networks. Our system is trained just with a single speaker corpus and adapts to new speakers using only a few minutes of training data. Using mid-size public datasets, our method outperforms the baseline in the VCC 2018 SPOKE non-parallel voice conversion task and achieves competitive results compared to multi-speaker networks trained on large private datasets. |
2103.15209 | Da Xu | Da Xu, Yuting Ye, Chuanwei Ruan | Understanding the role of importance weighting for deep learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent paper by Byrd & Lipton (2019), based on empirical observations,
raises a major concern on the impact of importance weighting for the
over-parameterized deep learning models. They observe that as long as the model
can separate the training data, the impact of importance weighting diminishes
as the training proceeds. Nevertheless, a rigorous characterization of this
phenomenon has been lacking. In this paper, we provide formal characterizations and
theoretical justifications on the role of importance weighting with respect to
the implicit bias of gradient descent and margin-based learning theory. We
reveal both the optimization dynamics and generalization performance under deep
learning models. Our work not only explains the various novel phenomena
observed for importance weighting in deep learning, but also extends to the
studies where the weights are being optimized as part of the model, which
applies to a number of topics under active research.
| [
{
"created": "Sun, 28 Mar 2021 19:44:47 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Xu",
"Da",
""
],
[
"Ye",
"Yuting",
""
],
[
"Ruan",
"Chuanwei",
""
]
] | The recent paper by Byrd & Lipton (2019), based on empirical observations, raises a major concern on the impact of importance weighting for the over-parameterized deep learning models. They observe that as long as the model can separate the training data, the impact of importance weighting diminishes as the training proceeds. Nevertheless, a rigorous characterization of this phenomenon has been lacking. In this paper, we provide formal characterizations and theoretical justifications on the role of importance weighting with respect to the implicit bias of gradient descent and margin-based learning theory. We reveal both the optimization dynamics and generalization performance under deep learning models. Our work not only explains the various novel phenomena observed for importance weighting in deep learning, but also extends to the studies where the weights are being optimized as part of the model, which applies to a number of topics under active research. |
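A small experiment in the spirit of the phenomenon above (an illustrative setup, not the paper's): on linearly separable data, gradient descent on the logistic loss with and without importance weights converges to nearly the same *direction*, so the weighting has a vanishing effect on the learned classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)          # linearly separable classes
wts = np.where(y < 0, 5.0, 1.0)                 # upweight the negative class

def fit(sample_w, steps=5000, lr=0.1):
    w = np.zeros(2)
    for _ in range(steps):
        # gradient of the (weighted) mean logistic loss
        g = -(sample_w * y / (1 + np.exp(y * (X @ w)))) @ X / len(y)
        w -= lr * g
    return w / np.linalg.norm(w)                # compare directions only

d_plain, d_weighted = fit(np.ones_like(y)), fit(wts)
print(float(d_plain @ d_weighted))  # cosine similarity close to 1
```

Both runs drift toward the max-margin direction dictated by the implicit bias of gradient descent, which is the mechanism the paper formalizes.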
2109.07321 | Roee Shraga PhD | Roee Shraga, Avigdor Gal | PoWareMatch: a Quality-aware Deep Learning Approach to Improve Human
Schema Matching | Technical report of the paper {\sf PoWareMatch}: a Quality-aware Deep
Learning Approach to Improve Human Schema Matching, accepted to ACM Journal
of Data and Information Quality (JDIQ), Special Issue on Deep Learning for
Data Quality | null | null | null | cs.DB cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Schema matching is a core task of any data integration process. Although it has
been investigated in the fields of databases, AI, the Semantic Web, and data mining for
many years, the main challenge remains the ability to generate quality matches
among data concepts (e.g., database attributes). In this work, we examine a
novel angle on the behavior of humans as matchers, studying match creation as a
process. We analyze the dynamics of common evaluation measures (precision,
recall, and f-measure), with respect to this angle and highlight the need for
unbiased matching to support this analysis. Unbiased matching, a newly defined
concept that describes the common assumption that human decisions represent
reliable assessments of schemata correspondences, is, however, not an inherent
property of human matchers. In what follows, we design PoWareMatch that makes
use of a deep learning mechanism to calibrate and filter human matching
decisions according to the quality of a match, which are then combined with
algorithmic matching to generate better match results. We provide empirical
evidence, based on an experiment with more than 200 human matchers
over common benchmarks, that PoWareMatch predicts well the benefit of extending
the match with an additional correspondence and generates high quality matches.
In addition, PoWareMatch outperforms state-of-the-art matching algorithms.
| [
{
"created": "Wed, 15 Sep 2021 14:24:56 GMT",
"version": "v1"
}
] | 2021-09-16 | [
[
"Shraga",
"Roee",
""
],
[
"Gal",
"Avigdor",
""
]
] | Schema matching is a core task of any data integration process. Although it has been investigated in the fields of databases, AI, the Semantic Web, and data mining for many years, the main challenge remains the ability to generate quality matches among data concepts (e.g., database attributes). In this work, we examine a novel angle on the behavior of humans as matchers, studying match creation as a process. We analyze the dynamics of common evaluation measures (precision, recall, and f-measure), with respect to this angle and highlight the need for unbiased matching to support this analysis. Unbiased matching, a newly defined concept that describes the common assumption that human decisions represent reliable assessments of schemata correspondences, is, however, not an inherent property of human matchers. In what follows, we design PoWareMatch that makes use of a deep learning mechanism to calibrate and filter human matching decisions according to the quality of a match, which are then combined with algorithmic matching to generate better match results. We provide empirical evidence, based on an experiment with more than 200 human matchers over common benchmarks, that PoWareMatch predicts well the benefit of extending the match with an additional correspondence and generates high quality matches. In addition, PoWareMatch outperforms state-of-the-art matching algorithms. |
1510.04585 | Kezhi Li | Kezhi Li | A Brief Survey of Image Processing Algorithms in Electrical Capacitance
Tomography | Internal Report, MRRC, University of Cambridge | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To study the fundamental physics of complex multiphase flow systems using
advanced measurement techniques, especially the electrical capacitance
tomography (ECT) approach, this article carries out an initial literature
review of the ECT method from the point of view of signal processing and
algorithm design. After introducing the physical laws governing the ECT system,
we focus on various reconstruction techniques that are capable of recovering
the image of the internal characteristics of a specified region based on the
measured capacitances of multi-electrode sensors surrounding the region. Each
technique has its own advantages and limitations, and many algorithms have been
examined by simulations or experiments. Future research on 3D reconstruction
and other potential improvements to the system is discussed at the end.
| [
{
"created": "Thu, 15 Oct 2015 15:36:03 GMT",
"version": "v1"
}
] | 2015-10-16 | [
[
"Li",
"Kezhi",
""
]
] | To study the fundamental physics of complex multiphase flow systems using advanced measurement techniques, especially the electrical capacitance tomography (ECT) approach, this article carries out an initial literature review of the ECT method from the point of view of signal processing and algorithm design. After introducing the physical laws governing the ECT system, we focus on various reconstruction techniques that are capable of recovering the image of the internal characteristics of a specified region based on the measured capacitances of multi-electrode sensors surrounding the region. Each technique has its own advantages and limitations, and many algorithms have been examined by simulations or experiments. Future research on 3D reconstruction and other potential improvements to the system is discussed at the end. |
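The simplest reconstruction technique such surveys commonly cover, linear back-projection (LBP), fits in a few lines. The sensitivity matrix below is a random stand-in and the sensor geometry is assumed, not taken from the article: with a sensitivity matrix S mapping pixel permittivities g to normalized capacitances c = S g, LBP estimates g by back-projecting, g_hat = S^T c / (S^T 1).

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_pix = 66, 32 * 32       # e.g. a 12-electrode sensor, 32x32 grid
S = np.abs(rng.normal(size=(n_meas, n_pix)))   # stand-in sensitivity matrix

g_true = np.zeros(n_pix)
g_true[200:260] = 1.0             # a small inclusion inside the region
c = S @ g_true                    # linearized forward model: capacitances

# Linear back-projection with per-pixel normalization
g_hat = (S.T @ c) / (S.T @ np.ones(n_meas))
print(g_hat[200:260].mean(), g_hat[300:360].mean())  # inclusion is brighter
```

LBP is fast but blurry, which is why the survey also covers iterative methods (e.g. Landweber-type schemes) that trade computation for resolution.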
2204.13792 | Recep Yusuf Bekci | Recep Yusuf Bekci, Yacine Mahdid, Jinling Xing, Nikita Letov, Ying
Zhang, Zahid Pasha | Probabilistic Models for Manufacturing Lead Times | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this study, we utilize Gaussian processes, probabilistic neural networks,
natural gradient boosting, and quantile-regression-augmented gradient boosting
to model lead times of laser manufacturing processes. We introduce
probabilistic modelling in the domain and compare the models in terms of
different abilities. While providing a comparison between the models on
real-life data, our work has many use cases and substantial business value. Our
results indicate that all of the models beat the company estimation benchmark
based on domain experience and are well calibrated with respect to the
empirical frequencies.
| [
{
"created": "Thu, 28 Apr 2022 21:51:52 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jun 2022 18:41:28 GMT",
"version": "v2"
}
] | 2022-06-30 | [
[
"Bekci",
"Recep Yusuf",
""
],
[
"Mahdid",
"Yacine",
""
],
[
"Xing",
"Jinling",
""
],
[
"Letov",
"Nikita",
""
],
[
"Zhang",
"Ying",
""
],
[
"Pasha",
"Zahid",
""
]
] | In this study, we utilize Gaussian processes, probabilistic neural networks, natural gradient boosting, and quantile-regression-augmented gradient boosting to model lead times of laser manufacturing processes. We introduce probabilistic modelling in the domain and compare the models in terms of different abilities. While providing a comparison between the models on real-life data, our work has many use cases and substantial business value. Our results indicate that all of the models beat the company estimation benchmark based on domain experience and are well calibrated with respect to the empirical frequencies. |
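A minimal sketch of the quantile-regression component mentioned above: fitting a linear model under the pinball (quantile) loss by subgradient descent yields a conditional quantile estimate of lead time. The data are synthetic and the linear model is an assumption for clarity; the paper's boosted models and features are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 2000)                 # e.g. job size
lead = 2.0 * x + rng.exponential(2.0, 2000)  # skewed lead-time noise
xc = x - x.mean()                            # center the feature

def fit_quantile(tau, steps=4000, lr=0.05):
    a, b = 0.0, 0.0
    for _ in range(steps):
        r = lead - (a * xc + b)
        # subgradient of the pinball loss w.r.t. the prediction
        grad = np.where(r > 0, -tau, 1 - tau)
        a -= lr * np.mean(grad * xc)
        b -= lr * np.mean(grad)
    return a, b

a90, b90 = fit_quantile(0.9)
cover = float(np.mean(lead <= a90 * xc + b90))
print(round(cover, 2))  # empirical coverage near the 0.90 target
```

Checking that the empirical coverage matches the nominal quantile is exactly the calibration-against-empirical-frequencies criterion the abstract reports.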
2402.04722 | Kit Gallagher | Kit Gallagher, Richard Creswell, Ben Lambert, Martin Robinson, Chon
Lok Lei, Gary R. Mirams, David J. Gavaghan | Ten simple rules for teaching sustainable software engineering | Prepared for submission to PLOS Computational Biology's 10 Simple
Rules collection | null | null | null | cs.CY cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational methods and associated software implementations are central to
every field of scientific investigation. Modern biological research,
particularly within systems biology, has relied heavily on the development of
software tools to process and organize increasingly large datasets, simulate
complex mechanistic models, provide tools for the analysis and management of
data, and visualize and organize outputs. However, developing high-quality
research software requires scientists to develop a host of software development
skills, and teaching these skills to students is challenging. Growing
importance has been placed on ensuring reproducibility and good development
practices in computational research. However, less attention has been devoted
to the specific teaching strategies that are effective at nurturing
in researchers the complex skill set required to produce high-quality software
which, increasingly, is needed to underpin both academic and industrial
biomedical research. Recent articles in the Ten Simple Rules collection have
discussed the teaching of foundational computer science and coding techniques
to biology students. We advance this discussion by describing the specific
steps for effectively teaching the necessary skills scientists need to develop
sustainable software packages which are fit for (re-)use in academic research
or more widely. Although our advice is likely to be applicable to all students
and researchers hoping to improve their software development skills, our
guidelines are directed towards an audience of students that have some
programming literacy but little formal training in software development or
engineering, typical of early doctoral students. These practices are also
applicable outside of doctoral training environments, and we believe they
should form a key part of postgraduate training schemes more generally in the
life sciences.
| [
{
"created": "Wed, 7 Feb 2024 10:16:20 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Gallagher",
"Kit",
""
],
[
"Creswell",
"Richard",
""
],
[
"Lambert",
"Ben",
""
],
[
"Robinson",
"Martin",
""
],
[
"Lei",
"Chon Lok",
""
],
[
"Mirams",
"Gary R.",
""
],
[
"Gavaghan",
"David J.",
""
]
] | Computational methods and associated software implementations are central to every field of scientific investigation. Modern biological research, particularly within systems biology, has relied heavily on the development of software tools to process and organize increasingly large datasets, simulate complex mechanistic models, provide tools for the analysis and management of data, and visualize and organize outputs. However, developing high-quality research software requires scientists to develop a host of software development skills, and teaching these skills to students is challenging. Growing importance has been placed on ensuring reproducibility and good development practices in computational research. However, less attention has been devoted to the specific teaching strategies that are effective at nurturing in researchers the complex skill set required to produce high-quality software which, increasingly, is needed to underpin both academic and industrial biomedical research. Recent articles in the Ten Simple Rules collection have discussed the teaching of foundational computer science and coding techniques to biology students. We advance this discussion by describing the specific steps for effectively teaching the necessary skills scientists need to develop sustainable software packages which are fit for (re-)use in academic research or more widely. Although our advice is likely to be applicable to all students and researchers hoping to improve their software development skills, our guidelines are directed towards an audience of students that have some programming literacy but little formal training in software development or engineering, typical of early doctoral students. These practices are also applicable outside of doctoral training environments, and we believe they should form a key part of postgraduate training schemes more generally in the life sciences. |
1808.10292 | Alexandros Gerbessiotis | Alexandros V. Gerbessiotis | A study of integer sorting on multicores | arXiv admin note: substantial text overlap with arXiv:1708.09495,
arXiv:1608.08648 | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integer sorting on multicores and GPUs can be realized by a variety of
approaches that include variants of distribution-based methods such as
radix-sort, comparison-oriented algorithms such as deterministic regular
sampling and random sampling parallel sorting, and network-based algorithms
such as Batcher's bitonic sorting algorithm.
In this work we present an experimental study of integer sorting on multicore
processors. We have implemented serial and parallel radix-sort for various
radixes, deterministic regular oversampling and random oversampling parallel
sorting, and also some previously little explored or unexplored variants of
bitonic-sort and odd-even transposition sort.
The study uses multithreading and multiprocessing parallel programming
libraries with the C language implementations working under Open MPI,
MulticoreBSP, and BSPlib utilizing the same source code.
A secondary objective is to attempt to model the performance of these
algorithm implementations under the MBSP (Multi-memory BSP) model. We first
provide some general high-level observations on the performance of these
implementations. If we can conclude anything, it is that accurate prediction of
performance by taking into consideration architecture-dependent features such
as the structure and characteristics of multiple memory hierarchies is
difficult and more often than not untenable. To some degree this is affected by
the overhead imposed by the high-level library used in the programming effort.
We can still draw however some reliable conclusions and reason about the
performance of these implementations using the MBSP model, thus making MBSP
useful and usable.
| [
{
"created": "Wed, 29 Aug 2018 14:28:35 GMT",
"version": "v1"
}
] | 2018-08-31 | [
[
"Gerbessiotis",
"Alexandros V.",
""
]
] | Integer sorting on multicores and GPUs can be realized by a variety of approaches that include variants of distribution-based methods such as radix-sort, comparison-oriented algorithms such as deterministic regular sampling and random sampling parallel sorting, and network-based algorithms such as Batcher's bitonic sorting algorithm. In this work we present an experimental study of integer sorting on multicore processors. We have implemented serial and parallel radix-sort for various radixes, deterministic regular oversampling and random oversampling parallel sorting, and also some previously little explored or unexplored variants of bitonic-sort and odd-even transposition sort. The study uses multithreading and multiprocessing parallel programming libraries with the C language implementations working under Open MPI, MulticoreBSP, and BSPlib utilizing the same source code. A secondary objective is to attempt to model the performance of these algorithm implementations under the MBSP (Multi-memory BSP) model. We first provide some general high-level observations on the performance of these implementations. If we can conclude anything, it is that accurate prediction of performance by taking into consideration architecture-dependent features such as the structure and characteristics of multiple memory hierarchies is difficult and more often than not untenable. To some degree this is affected by the overhead imposed by the high-level library used in the programming effort. We can still draw however some reliable conclusions and reason about the performance of these implementations using the MBSP model, thus making MBSP useful and usable. |
2308.03151 | Zheng Ma | Zheng Ma, Mianzhi Pan, Wenhan Wu, Kanzhi Cheng, Jianbing Zhang,
Shujian Huang and Jiajun Chen | Food-500 Cap: A Fine-Grained Food Caption Benchmark for Evaluating
Vision-Language Models | Accepted at ACM Multimedia (ACMMM) 2023 | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-language models (VLMs) have shown impressive performance in
substantial downstream multi-modal tasks. However, only comparing the
fine-tuned performance on downstream tasks leads to the poor interpretability
of VLMs, which is adverse to their future improvement. Several prior works have
identified this issue and used various probing methods under a zero-shot
setting to detect VLMs' limitations, but they all examine VLMs using general
datasets instead of specialized ones. In practical applications, VLMs are
usually applied to specific scenarios, such as e-commerce and news fields, so
the generalization of VLMs in specific domains should be given more attention.
In this paper, we comprehensively investigate the capabilities of popular VLMs
in a specific field, the food domain. To this end, we build a food caption
dataset, Food-500 Cap, which contains 24,700 food images with 494 categories.
Each image is accompanied by a detailed caption, including fine-grained
attributes of food, such as the ingredient, shape, and color. We also provide a
culinary culture taxonomy that classifies each food category based on its
geographic origin in order to better analyze the performance differences of
VLMs in different regions. Experiments on our proposed dataset demonstrate that
popular VLMs underperform in the food domain compared with their performance in
the general domain. Furthermore, our research reveals severe bias in VLMs'
ability to handle food items from different geographic regions. We adopt
diverse probing methods and evaluate nine VLMs belonging to different
architectures to verify the aforementioned observations. We hope that our study
will bring researchers' attention to VLMs' limitations when applying them to
the domain of food or culinary cultures, and spur further investigations to
address this issue.
| [
{
"created": "Sun, 6 Aug 2023 15:56:31 GMT",
"version": "v1"
}
] | 2023-08-08 | [
[
"Ma",
"Zheng",
""
],
[
"Pan",
"Mianzhi",
""
],
[
"Wu",
"Wenhan",
""
],
[
"Cheng",
"Kanzhi",
""
],
[
"Zhang",
"Jianbing",
""
],
[
"Huang",
"Shujian",
""
],
[
"Chen",
"Jiajun",
""
]
] | Vision-language models (VLMs) have shown impressive performance in substantial downstream multi-modal tasks. However, only comparing the fine-tuned performance on downstream tasks leads to the poor interpretability of VLMs, which is adverse to their future improvement. Several prior works have identified this issue and used various probing methods under a zero-shot setting to detect VLMs' limitations, but they all examine VLMs using general datasets instead of specialized ones. In practical applications, VLMs are usually applied to specific scenarios, such as e-commerce and news fields, so the generalization of VLMs in specific domains should be given more attention. In this paper, we comprehensively investigate the capabilities of popular VLMs in a specific field, the food domain. To this end, we build a food caption dataset, Food-500 Cap, which contains 24,700 food images with 494 categories. Each image is accompanied by a detailed caption, including fine-grained attributes of food, such as the ingredient, shape, and color. We also provide a culinary culture taxonomy that classifies each food category based on its geographic origin in order to better analyze the performance differences of VLMs in different regions. Experiments on our proposed dataset demonstrate that popular VLMs underperform in the food domain compared with their performance in the general domain. Furthermore, our research reveals severe bias in VLMs' ability to handle food items from different geographic regions. We adopt diverse probing methods and evaluate nine VLMs belonging to different architectures to verify the aforementioned observations. We hope that our study will bring researchers' attention to VLMs' limitations when applying them to the domain of food or culinary cultures, and spur further investigations to address this issue. |
2402.08986 | Wenwei Zhao | Wenwei Zhao, Xiaowen Li, Shangqing Zhao, Jie Xu, Yao Liu, Zhuo Lu | Detecting Adversarial Spectrum Attacks via Distance to Decision Boundary
Statistics | 10 pages, 11 figures | null | null | null | cs.CR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning has been adopted for efficient cooperative spectrum sensing.
However, it incurs an additional security risk due to attacks leveraging
adversarial machine learning to create malicious spectrum sensing values to
deceive the fusion center, called adversarial spectrum attacks. In this paper,
we propose an efficient framework for detecting adversarial spectrum attacks.
Our design leverages the concept of the distance to the decision boundary (DDB)
observed at the fusion center and compares the training and testing DDB
distributions to identify adversarial spectrum attacks. We create a
computationally efficient way to compute the DDB for machine learning based
spectrum sensing systems. Experimental results based on realistic spectrum data
show that our method, under typical settings, achieves a high detection rate of
up to 99\% and maintains a low false alarm rate of less than 1\%. In addition,
our method to compute the DDB based on spectrum data achieves 54\%--64\%
improvements in computational efficiency over existing distance calculation
methods. The proposed DDB-based detection framework offers a practical and
efficient solution for identifying malicious sensing values created by
adversarial spectrum attacks.
| [
{
"created": "Wed, 14 Feb 2024 06:57:21 GMT",
"version": "v1"
}
] | 2024-02-15 | [
[
"Zhao",
"Wenwei",
""
],
[
"Li",
"Xiaowen",
""
],
[
"Zhao",
"Shangqing",
""
],
[
"Xu",
"Jie",
""
],
[
"Liu",
"Yao",
""
],
[
"Lu",
"Zhuo",
""
]
] | Machine learning has been adopted for efficient cooperative spectrum sensing. However, it incurs an additional security risk due to attacks leveraging adversarial machine learning to create malicious spectrum sensing values to deceive the fusion center, called adversarial spectrum attacks. In this paper, we propose an efficient framework for detecting adversarial spectrum attacks. Our design leverages the concept of the distance to the decision boundary (DDB) observed at the fusion center and compares the training and testing DDB distributions to identify adversarial spectrum attacks. We create a computationally efficient way to compute the DDB for machine learning based spectrum sensing systems. Experimental results based on realistic spectrum data show that our method, under typical settings, achieves a high detection rate of up to 99\% and maintains a low false alarm rate of less than 1\%. In addition, our method to compute the DDB based on spectrum data achieves 54\%--64\% improvements in computational efficiency over existing distance calculation methods. The proposed DDB-based detection framework offers a practical and efficient solution for identifying malicious sensing values created by adversarial spectrum attacks. |
1807.01053 | Sergey Goncharov | Sergey Goncharov, Julian Jakob, Renato Neves | A Semantics for Hybrid Iteration | Corrected version of a CONCUR'18 paper; more proof details | null | 10.4230/LIPIcs.CONCUR.2018.22 | null | cs.PL cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently introduced notions of guarded traced (monoidal) category and
guarded (pre-)iterative monad aim at unifying different instances of partial
iteration whilst keeping in touch with the established theory of total
iteration and preserving its merits. In this paper we use these notions and the
corresponding stock of results to examine different types of iteration for
hybrid computations. As a starting point we use an available notion of hybrid
monad restricted to the category of sets, and modify it in order to obtain a
suitable notion of guarded iteration with guardedness interpreted as
progressiveness in time - we motivate this modification by our intention to
capture Zeno behaviour in an arguably general and feasible way. We illustrate
our results with a simple programming language for hybrid computations and
interpret it over the developed semantic foundations.
| [
{
"created": "Tue, 3 Jul 2018 09:47:52 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Feb 2019 15:59:02 GMT",
"version": "v2"
}
] | 2019-02-07 | [
[
"Goncharov",
"Sergey",
""
],
[
"Jakob",
"Julian",
""
],
[
"Neves",
"Renato",
""
]
] | The recently introduced notions of guarded traced (monoidal) category and guarded (pre-)iterative monad aim at unifying different instances of partial iteration whilst keeping in touch with the established theory of total iteration and preserving its merits. In this paper we use these notions and the corresponding stock of results to examine different types of iteration for hybrid computations. As a starting point we use an available notion of hybrid monad restricted to the category of sets, and modify it in order to obtain a suitable notion of guarded iteration with guardedness interpreted as progressiveness in time - we motivate this modification by our intention to capture Zeno behaviour in an arguably general and feasible way. We illustrate our results with a simple programming language for hybrid computations and interpret it over the developed semantic foundations. |
1807.03099 | Ayse Ipek Akin Atalay | Ayse Ipek Akin, Nafiseh Janatian, Ivan Stupia, and Luc Vandendorpe | SWIPT-based Real-Time Mobile Computing Systems: A Stochastic Geometry
Perspective | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by the Internet of Things vision, recent years have seen the rise of
new horizons for the wireless ecosystem in which a very large number of mobile
low power devices interact to run sophisticated applications. The main
hindrance to the massive deployment of low power nodes is most probably the
prohibitive maintenance cost of battery replacement and the ecotoxicity of the
battery production/end-of-life. An emerging research direction to avoid battery
replacement is the combination of radio frequency energy harvesting and mobile
computing (MC). In this paper, we propose the use of simultaneous information
and power transfer (SWIPT) to control the distributed computation process while
delivering power to perform the computation tasks requested. A real-time MC
system is considered, meaning that the trade-off between the information rate
and the energy harvested must be carefully chosen to guarantee that the CPU may
perform tasks of given complexity before receiving a new control signal. In
order to provide a system-level perspective on the performance of SWIPT-MC
networks, we propose a mathematical framework based on stochastic geometry to
characterise the rate-energy trade-off of the system. The resulting achievable
performance region is then put in relation with the CPU energy consumption to
investigate the operating conditions of real-time computing systems. Finally,
numerical results illustrate the joint effect of the network densification and
the propagation environment on the optimisation of the CPU usage.
| [
{
"created": "Mon, 9 Jul 2018 13:17:33 GMT",
"version": "v1"
}
] | 2018-07-10 | [
[
"Akin",
"Ayse Ipek",
""
],
[
"Janatian",
"Nafiseh",
""
],
[
"Stupia",
"Ivan",
""
],
[
"Vandendorpe",
"Luc",
""
]
] | Driven by the Internet of Things vision, recent years have seen the rise of new horizons for the wireless ecosystem in which a very large number of mobile low power devices interact to run sophisticated applications. The main hindrance to the massive deployment of low power nodes is most probably the prohibitive maintenance cost of battery replacement and the ecotoxicity of the battery production/end-of-life. An emerging research direction to avoid battery replacement is the combination of radio frequency energy harvesting and mobile computing (MC). In this paper, we propose the use of simultaneous information and power transfer (SWIPT) to control the distributed computation process while delivering power to perform the computation tasks requested. A real-time MC system is considered, meaning that the trade-off between the information rate and the energy harvested must be carefully chosen to guarantee that the CPU may perform tasks of given complexity before receiving a new control signal. In order to provide a system-level perspective on the performance of SWIPT-MC networks, we propose a mathematical framework based on stochastic geometry to characterise the rate-energy trade-off of the system. The resulting achievable performance region is then put in relation with the CPU energy consumption to investigate the operating conditions of real-time computing systems. Finally, numerical results illustrate the joint effect of the network densification and the propagation environment on the optimisation of the CPU usage. |
2003.04865 | Yutaro Shigeto | Yutaro Shigeto, Yuya Yoshikawa, Jiaqing Lin, Akikazu Takeuchi | Video Caption Dataset for Describing Human Actions in Japanese | Accepted for LREC 2020. Dataset available at
https://actions.stair.center/captions.html | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, automatic video caption generation has attracted
considerable attention. This paper focuses on the generation of Japanese
captions for describing human actions. While most currently available video
caption datasets have been constructed for English, there is no equivalent
Japanese dataset. To address this, we constructed a large-scale Japanese video
caption dataset consisting of 79,822 videos and 399,233 captions. Each caption
in our dataset describes a video in the form of "who does what and where." To
describe human actions, it is important to identify the details of a person,
place, and action. Indeed, when we describe human actions, we usually mention
the scene, person, and action. In our experiments, we evaluated two caption
generation methods to obtain benchmark results. Further, we investigated
whether those generation methods could specify "who does what and where."
| [
{
"created": "Tue, 10 Mar 2020 17:15:48 GMT",
"version": "v1"
}
] | 2020-03-11 | [
[
"Shigeto",
"Yutaro",
""
],
[
"Yoshikawa",
"Yuya",
""
],
[
"Lin",
"Jiaqing",
""
],
[
"Takeuchi",
"Akikazu",
""
]
] | In recent years, automatic video caption generation has attracted considerable attention. This paper focuses on the generation of Japanese captions for describing human actions. While most currently available video caption datasets have been constructed for English, there is no equivalent Japanese dataset. To address this, we constructed a large-scale Japanese video caption dataset consisting of 79,822 videos and 399,233 captions. Each caption in our dataset describes a video in the form of "who does what and where." To describe human actions, it is important to identify the details of a person, place, and action. Indeed, when we describe human actions, we usually mention the scene, person, and action. In our experiments, we evaluated two caption generation methods to obtain benchmark results. Further, we investigated whether those generation methods could specify "who does what and where." |
2112.03154 | Haoran Xu | Haoran Xu, Sixing Lu, Zhongkai Sun, Chengyuan Ma, Chenlei Guo | VAE based Text Style Transfer with Pivot Words Enhancement Learning | Accepted at The eighteenth International Conference on Natural
Language Processing | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text Style Transfer (TST) aims to alter the underlying style of the source
text to another specific style while keeping the same content. Due to the
scarcity of high-quality parallel training data, unsupervised learning has
become a trending direction for TST tasks. In this paper, we propose a novel
VAE based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER)
method which utilizes Variational AutoEncoder (VAE) and external style
embeddings to learn semantics and style distribution jointly. Additionally, we
introduce pivot words learning, which is applied to learn decisive words for a
specific style and thereby further improve the overall performance of the style
transfer. The proposed VT-STOWER can be scaled to different TST scenarios given
very limited and non-parallel training data with a novel and flexible style
strength control mechanism. Experiments demonstrate that the VT-STOWER
outperforms the state-of-the-art on sentiment, formality, and code-switching
TST tasks.
| [
{
"created": "Mon, 6 Dec 2021 16:41:26 GMT",
"version": "v1"
}
] | 2021-12-07 | [
[
"Xu",
"Haoran",
""
],
[
"Lu",
"Sixing",
""
],
[
"Sun",
"Zhongkai",
""
],
[
"Ma",
"Chengyuan",
""
],
[
"Guo",
"Chenlei",
""
]
] | Text Style Transfer (TST) aims to alter the underlying style of the source text to another specific style while keeping the same content. Due to the scarcity of high-quality parallel training data, unsupervised learning has become a trending direction for TST tasks. In this paper, we propose a novel VAE based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER) method which utilizes Variational AutoEncoder (VAE) and external style embeddings to learn semantics and style distribution jointly. Additionally, we introduce pivot words learning, which is applied to learn decisive words for a specific style and thereby further improve the overall performance of the style transfer. The proposed VT-STOWER can be scaled to different TST scenarios given very limited and non-parallel training data with a novel and flexible style strength control mechanism. Experiments demonstrate that the VT-STOWER outperforms the state-of-the-art on sentiment, formality, and code-switching TST tasks. |
2308.09592 | Wenhao Chai | Wenhao Chai, Xun Guo, Gaoang Wang, Yan Lu | StableVideo: Text-driven Consistency-aware Diffusion Video Editing | ICCV 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion-based methods can generate realistic images and videos, but they
struggle to edit existing objects in a video while preserving their appearance
over time. This prevents diffusion models from being applied to natural video
editing in practical scenarios. In this paper, we tackle this problem by
introducing temporal dependency to existing text-driven diffusion models, which
allows them to generate consistent appearance for the edited objects.
Specifically, we develop a novel inter-frame propagation mechanism for
diffusion video editing, which leverages the concept of layered representations
to propagate the appearance information from one frame to the next. We then
build up a text-driven video editing framework based on this mechanism, namely
StableVideo, which can achieve consistency-aware video editing. Extensive
experiments demonstrate the strong editing capability of our approach. Compared
with state-of-the-art video editing methods, our approach shows superior
qualitative and quantitative results. Our code is available at
\href{https://github.com/rese1f/StableVideo}{this https URL}.
| [
{
"created": "Fri, 18 Aug 2023 14:39:16 GMT",
"version": "v1"
}
] | 2023-08-21 | [
[
"Chai",
"Wenhao",
""
],
[
"Guo",
"Xun",
""
],
[
"Wang",
"Gaoang",
""
],
[
"Lu",
"Yan",
""
]
] | Diffusion-based methods can generate realistic images and videos, but they struggle to edit existing objects in a video while preserving their appearance over time. This prevents diffusion models from being applied to natural video editing in practical scenarios. In this paper, we tackle this problem by introducing temporal dependency to existing text-driven diffusion models, which allows them to generate consistent appearance for the edited objects. Specifically, we develop a novel inter-frame propagation mechanism for diffusion video editing, which leverages the concept of layered representations to propagate the appearance information from one frame to the next. We then build up a text-driven video editing framework based on this mechanism, namely StableVideo, which can achieve consistency-aware video editing. Extensive experiments demonstrate the strong editing capability of our approach. Compared with state-of-the-art video editing methods, our approach shows superior qualitative and quantitative results. Our code is available at \href{https://github.com/rese1f/StableVideo}{this https URL}. |
1401.0870 | Laurence Aroquiaraj | I. Laurence Aroquiaraj and K. Thangavel | Pectoral Muscles Suppression in Digital Mammograms using Hybridization
of Soft Computing Methods | 8 pages, 6 figures | null | null | null | cs.CV cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Breast region segmentation is an essential prerequisite in computerized
analysis of mammograms. It aims at separating the breast tissue from the
background of the mammogram and it includes two independent segmentations. The
first segments the background region which usually contains annotations, labels
and frames from the whole breast region, while the second removes the pectoral
muscle portion (present in Medio Lateral Oblique (MLO) views) from the rest of
the breast tissue. In this paper we propose hybridization of Connected
Component Labeling (CCL), Fuzzy, and Straight line methods. Our proposed
methods worked well for separating the pectoral region. After removal of the
pectoral muscle from the mammogram, further processing is confined to the
breast region
alone. To demonstrate the validity of our segmentation algorithm, it is
extensively tested using over 322 mammographic images from the Mammographic
Image Analysis Society (MIAS) database. The segmentation results were evaluated
using a Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand
Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The
hybridization of the fuzzy and straight line methods gave more than 96% of the
curve segmentations as adequate or better. In addition, a comparison with
similar approaches from the state of the art has been given, obtaining slightly
improved results. Experimental results demonstrate the effectiveness of the
proposed approach.
| [
{
"created": "Sun, 5 Jan 2014 08:14:43 GMT",
"version": "v1"
}
] | 2014-01-07 | [
[
"Aroquiaraj",
"I. Laurence",
""
],
[
"Thangavel",
"K.",
""
]
] | Breast region segmentation is an essential prerequisite in computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and it includes two independent segmentations. The first segments the background region which usually contains annotations, labels and frames from the whole breast region, while the second removes the pectoral muscle portion (present in Medio Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose hybridization of Connected Component Labeling (CCL), Fuzzy, and Straight line methods. Our proposed methods worked well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using over 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using a Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight line methods gave more than 96% of the curve segmentations as adequate or better. In addition, a comparison with similar approaches from the state of the art has been given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach. |
2010.03412 | Weijia Xu | Weijia Xu, Xing Niu, Marine Carpuat | Dual Reconstruction: a Unifying Objective for Semi-Supervised Neural
Machine Translation | Accepted at Findings of EMNLP 2020 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While Iterative Back-Translation and Dual Learning effectively incorporate
monolingual training data in neural machine translation, they use different
objectives and heuristic gradient approximation strategies, and have not been
extensively compared. We introduce a novel dual reconstruction objective that
provides a unified view of Iterative Back-Translation and Dual Learning. It
motivates a theoretical analysis and controlled empirical study on
German-English and Turkish-English tasks, which both suggest that Iterative
Back-Translation is more effective than Dual Learning despite its relative
simplicity.
| [
{
"created": "Wed, 7 Oct 2020 13:40:32 GMT",
"version": "v1"
}
] | 2020-10-08 | [
[
"Xu",
"Weijia",
""
],
[
"Niu",
"Xing",
""
],
[
"Carpuat",
"Marine",
""
]
] | While Iterative Back-Translation and Dual Learning effectively incorporate monolingual training data in neural machine translation, they use different objectives and heuristic gradient approximation strategies, and have not been extensively compared. We introduce a novel dual reconstruction objective that provides a unified view of Iterative Back-Translation and Dual Learning. It motivates a theoretical analysis and controlled empirical study on German-English and Turkish-English tasks, which both suggest that Iterative Back-Translation is more effective than Dual Learning despite its relative simplicity. |
2112.02053 | Valdemar \v{S}v\'abensk\'y | Valdemar \v{S}v\'abensk\'y, Richard Weiss, Jack Cook, Jan Vykopal,
Pavel \v{C}eleda, Jens Mache, Radoslav Chudovsk\'y, Ankur Chattopadhyay | Evaluating Two Approaches to Assessing Student Progress in Cybersecurity
Exercises | ACM SIGCSE 2022 conference, 7 pages, 3 figures | null | 10.1145/3478431.3499414 | null | cs.CY cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cybersecurity students need to develop practical skills such as using
command-line tools. Hands-on exercises are the most direct way to assess these
skills, but assessing students' mastery is a challenging task for instructors.
We aim to alleviate this issue by modeling and visualizing student progress
automatically throughout the exercise. The progress is summarized by graph
models based on the shell commands students typed to achieve discrete tasks
within the exercise. We implemented two types of models and compared them using
data from 46 students at two universities. To evaluate our models, we surveyed
22 experienced computing instructors and qualitatively analyzed their
responses. The majority of instructors interpreted the graph models effectively
and identified strengths, weaknesses, and assessment use cases for each model.
Based on the evaluation, we provide recommendations to instructors and explain
how our graph models innovate teaching and promote further research. The impact
of this paper is threefold. First, it demonstrates how multiple institutions
can collaborate to share approaches to modeling student progress in hands-on
exercises. Second, our modeling techniques generalize to data from different
environments to support student assessment, even outside the cybersecurity
domain. Third, we share the acquired data and open-source software so that
others can use the models in their classes or research.
| [
{
"created": "Fri, 3 Dec 2021 18:08:27 GMT",
"version": "v1"
}
] | 2021-12-06 | [
[
"Švábenský",
"Valdemar",
""
],
[
"Weiss",
"Richard",
""
],
[
"Cook",
"Jack",
""
],
[
"Vykopal",
"Jan",
""
],
[
"Čeleda",
"Pavel",
""
],
[
"Mache",
"Jens",
""
],
[
"Chudovský",
"Radoslav",
""
],
[
"Chattopadhyay",
"Ankur",
""
]
] | Cybersecurity students need to develop practical skills such as using command-line tools. Hands-on exercises are the most direct way to assess these skills, but assessing students' mastery is a challenging task for instructors. We aim to alleviate this issue by modeling and visualizing student progress automatically throughout the exercise. The progress is summarized by graph models based on the shell commands students typed to achieve discrete tasks within the exercise. We implemented two types of models and compared them using data from 46 students at two universities. To evaluate our models, we surveyed 22 experienced computing instructors and qualitatively analyzed their responses. The majority of instructors interpreted the graph models effectively and identified strengths, weaknesses, and assessment use cases for each model. Based on the evaluation, we provide recommendations to instructors and explain how our graph models innovate teaching and promote further research. The impact of this paper is threefold. First, it demonstrates how multiple institutions can collaborate to share approaches to modeling student progress in hands-on exercises. Second, our modeling techniques generalize to data from different environments to support student assessment, even outside the cybersecurity domain. Third, we share the acquired data and open-source software so that others can use the models in their classes or research. |
2206.05968 | Elahe Ghasemi | Mohammad Rashid, Elahe Ghasemi and Javad B.Ebrahimi | Entropic Weighted Rank Function | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is known that the entropy function over a set of jointly distributed
random variables is a submodular set function. However, not every submodular
function is of this form. In this paper, we consider a family of submodular set
functions, called weighted rank functions of matroids, and study the necessary
or sufficient conditions under which they are entropic. We prove that weighted
rank functions are located on the boundary of the submodularity cone. For the
representable matroids over a characteristic 2 field, we show that the integer
valued weighted rank functions are entropic. We derive a necessary condition
for constant weight rank functions to be entropic and show that for the case of
graphic matroids, this condition is indeed sufficient. Since these functions
generalize the rank of a matroid, our findings generalize some of the results
of Abbe et al. about entropic properties of the rank function of matroids.
| [
{
"created": "Mon, 13 Jun 2022 08:32:12 GMT",
"version": "v1"
}
] | 2022-06-14 | [
[
"Rashid",
"Mohammad",
""
],
[
"Ghasemi",
"Elahe",
""
],
[
"Ebrahimi",
"Javad B.",
""
]
] | It is known that the entropy function over a set of jointly distributed random variables is a submodular set function. However, not every submodular function is of this form. In this paper, we consider a family of submodular set functions, called weighted rank functions of matroids, and study the necessary or sufficient conditions under which they are entropic. We prove that weighted rank functions are located on the boundary of the submodularity cone. For the representable matroids over a characteristic 2 field, we show that the integer valued weighted rank functions are entropic. We derive a necessary condition for constant weight rank functions to be entropic and show that for the case of graphic matroids, this condition is indeed sufficient. Since these functions generalize the rank of a matroid, our findings generalize some of the results of Abbe et al. about entropic properties of the rank function of matroids. |
2003.00003 | Klaus-Tycho Foerster | Utz Nisslmueller, Klaus-Tycho Foerster, Stefan Schmid, Christian
Decker | Toward Active and Passive Confidentiality Attacks On Cryptocurrency
Off-Chain Networks | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cryptocurrency off-chain networks such as Lightning (e.g., Bitcoin) or Raiden
(e.g., Ethereum) aim to increase the scalability of traditional on-chain
transactions. To support nodes in learning about possible paths to route their
transactions, these networks need to provide gossip and probing mechanisms.
This paper explores whether these mechanisms may be exploited to infer
sensitive information about the flow of transactions, and eventually harm
privacy. In particular, we identify two threats, related to an active and a
passive adversary. The first is a probing attack: here the adversary aims to
detect the maximum amount which is transferable in a given direction over a
target channel by actively probing it and differentiating the response messages
it receives. The second is a timing attack: the adversary discovers how close
the destination of a routed payment actually is, by acting as a passive
man-in-the-middle and analyzing the time deltas between sent messages and their
corresponding responses. We then analyze the limitations of these attacks and
propose remediations for scenarios in which they are able to produce accurate
results.
| [
{
"created": "Fri, 28 Feb 2020 08:56:08 GMT",
"version": "v1"
}
] | 2020-03-03 | [
[
"Nisslmueller",
"Utz",
""
],
[
"Foerster",
"Klaus-Tycho",
""
],
[
"Schmid",
"Stefan",
""
],
[
"Decker",
"Christian",
""
]
] | Cryptocurrency off-chain networks such as Lightning (e.g., Bitcoin) or Raiden (e.g., Ethereum) aim to increase the scalability of traditional on-chain transactions. To support nodes in learning about possible paths to route their transactions, these networks need to provide gossip and probing mechanisms. This paper explores whether these mechanisms may be exploited to infer sensitive information about the flow of transactions, and eventually harm privacy. In particular, we identify two threats, related to an active and a passive adversary. The first is a probing attack: here the adversary aims to detect the maximum amount which is transferable in a given direction over a target channel by actively probing it and differentiating the response messages it receives. The second is a timing attack: the adversary discovers how close the destination of a routed payment actually is, by acting as a passive man-in-the-middle and analyzing the time deltas between sent messages and their corresponding responses. We then analyze the limitations of these attacks and propose remediations for scenarios in which they are able to produce accurate results. |
0805.0184 | Youngchul Sung | Youngchul Sung, H. Vincent Poor and Heejung Yu | Information, Energy and Density for Ad Hoc Sensor Networks over
Correlated Random Fields: Large Deviations Analysis | Proceedings of the 2008 IEEE International Symposium on Information
Theory, Toronto, ON, Canada, July 6 - 11, 2008 | null | 10.1109/ISIT.2008.4595256 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using large deviations results that characterize the amount of information
per node on a two-dimensional (2-D) lattice, asymptotic behavior of a sensor
network deployed over a correlated random field for statistical inference is
investigated. Under a 2-D hidden Gauss-Markov random field model with symmetric
first order conditional autoregression, the behavior of the total information
[nats] and energy efficiency [nats/J] defined as the ratio of total gathered
information to the required energy is obtained as the coverage area, node
density and energy vary.
| [
{
"created": "Fri, 2 May 2008 07:36:28 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Sung",
"Youngchul",
""
],
[
"Poor",
"H. Vincent",
""
],
[
"Yu",
"Heejung",
""
]
] | Using large deviations results that characterize the amount of information per node on a two-dimensional (2-D) lattice, asymptotic behavior of a sensor network deployed over a correlated random field for statistical inference is investigated. Under a 2-D hidden Gauss-Markov random field model with symmetric first order conditional autoregression, the behavior of the total information [nats] and energy efficiency [nats/J] defined as the ratio of total gathered information to the required energy is obtained as the coverage area, node density and energy vary. |
1902.10898 | Ahmed Hareedy | Ahmed Hareedy, Robert Calderbank | LOCO Codes: Lexicographically-Ordered Constrained Codes | 17 pages (double column), 2 figures, accepted at the IEEE
Transactions on Information Theory (TIT), the short version was accepted at
the IEEE Information Theory Workshop (ITW), this version reflects comments
from reviewers at TIT and ITW | IEEE Transactions on Information Theory, vol. 66, no. 6, pp.
3572-3589, Jun. 2020 | 10.1109/TIT.2019.2943244 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Line codes make it possible to mitigate interference, to prevent short
pulses, and to generate streams of bipolar signals with no direct-current (DC)
power content through balancing. They find application in magnetic recording
(MR) devices, in Flash devices, in optical recording devices, and in some
computer standards. This paper introduces a new family of fixed-length, binary
constrained codes, named lexicographically-ordered constrained codes (LOCO
codes), for bipolar non-return-to-zero signaling. LOCO codes are
capacity-achieving, the lexicographic indexing enables simple, practical
encoding and decoding, and this simplicity is demonstrated through analysis of
circuit complexity. LOCO codes are easy to balance, and their inherent symmetry
minimizes the rate loss with respect to unbalanced codes having the same
constraints. Furthermore, LOCO codes that forbid certain patterns can be used
to alleviate inter-symbol interference in MR systems and inter-cell
interference in Flash systems. Numerical results demonstrate a gain of up to
10% in rate achieved by LOCO codes with respect to other practical constrained
codes, including run-length-limited codes, designed for the same purpose.
Simulation results suggest that it is possible to achieve a channel density
gain of about 20% in MR systems by using a LOCO code to encode only the parity
bits of a low-density parity-check code, limiting the rate loss, before
writing.
| [
{
"created": "Thu, 28 Feb 2019 05:22:33 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2019 19:44:15 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Jun 2019 21:54:49 GMT",
"version": "v3"
},
{
"created": "Fri, 20 Sep 2019 05:26:09 GMT",
"version": "v4"
},
{
"created": "Mon, 14 Oct 2019 15:40:16 GMT",
"version": "v5"
}
] | 2020-05-26 | [
[
"Hareedy",
"Ahmed",
""
],
[
"Calderbank",
"Robert",
""
]
] | Line codes make it possible to mitigate interference, to prevent short pulses, and to generate streams of bipolar signals with no direct-current (DC) power content through balancing. They find application in magnetic recording (MR) devices, in Flash devices, in optical recording devices, and in some computer standards. This paper introduces a new family of fixed-length, binary constrained codes, named lexicographically-ordered constrained codes (LOCO codes), for bipolar non-return-to-zero signaling. LOCO codes are capacity-achieving, the lexicographic indexing enables simple, practical encoding and decoding, and this simplicity is demonstrated through analysis of circuit complexity. LOCO codes are easy to balance, and their inherent symmetry minimizes the rate loss with respect to unbalanced codes having the same constraints. Furthermore, LOCO codes that forbid certain patterns can be used to alleviate inter-symbol interference in MR systems and inter-cell interference in Flash systems. Numerical results demonstrate a gain of up to 10% in rate achieved by LOCO codes with respect to other practical constrained codes, including run-length-limited codes, designed for the same purpose. Simulation results suggest that it is possible to achieve a channel density gain of about 20% in MR systems by using a LOCO code to encode only the parity bits of a low-density parity-check code, limiting the rate loss, before writing. |
2012.05766 | Antonio Rago | Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago and
Francesca Toni | Deep Argumentative Explanations | 16 pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent, widespread focus on eXplainable AI (XAI), explanations
computed by XAI methods tend to provide little insight into the functioning of
Neural Networks (NNs). We propose a novel framework for obtaining (local)
explanations from NNs while providing transparency about their inner workings,
and show how to deploy it for various neural architectures and tasks. We refer
to our novel explanations collectively as Deep Argumentative eXplanations (DAXs
in short), given that they reflect the deep structure of the underlying NNs and
that they are defined in terms of notions from computational argumentation, a
form of symbolic AI offering useful reasoning abstractions for explanation. We
evaluate DAXs empirically showing that they exhibit deep fidelity and low
computational cost. We also conduct human experiments indicating that DAXs are
comprehensible to humans and align with their judgement, while also being
competitive, in terms of user acceptance, with some existing approaches to XAI
that also have an argumentative spirit.
| [
{
"created": "Thu, 10 Dec 2020 15:55:09 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Mar 2021 16:46:05 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Mar 2021 17:12:30 GMT",
"version": "v3"
},
{
"created": "Mon, 14 Jun 2021 12:29:14 GMT",
"version": "v4"
}
] | 2021-06-15 | [
[
"Albini",
"Emanuele",
""
],
[
"Lertvittayakumjorn",
"Piyawat",
""
],
[
"Rago",
"Antonio",
""
],
[
"Toni",
"Francesca",
""
]
] | Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs). We propose a novel framework for obtaining (local) explanations from NNs while providing transparency about their inner workings, and show how to deploy it for various neural architectures and tasks. We refer to our novel explanations collectively as Deep Argumentative eXplanations (DAXs in short), given that they reflect the deep structure of the underlying NNs and that they are defined in terms of notions from computational argumentation, a form of symbolic AI offering useful reasoning abstractions for explanation. We evaluate DAXs empirically showing that they exhibit deep fidelity and low computational cost. We also conduct human experiments indicating that DAXs are comprehensible to humans and align with their judgement, while also being competitive, in terms of user acceptance, with some existing approaches to XAI that also have an argumentative spirit. |
2005.12175 | Guy Avni | Parand Alizadeh Alamdari, Guy Avni, Thomas A. Henzinger, Anna Lukina | Formal Methods with a Touch of Magic | Published in FMCAD 2020 | null | null | null | cs.LO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning and formal methods have complementary benefits and
drawbacks. In this work, we address the controller-design problem with a
combination of techniques from both fields. The use of black-box neural
networks in deep reinforcement learning (deep RL) poses a challenge for such a
combination. Instead of reasoning formally about the output of deep RL, which
we call the {\em wizard}, we extract from it a decision-tree based model, which
we refer to as the {\em magic book}. Using the extracted model as an
intermediary, we are able to handle problems that are infeasible for either
deep RL or formal methods by themselves. First, we suggest, for the first time,
combining a magic book in a synthesis procedure. We synthesize a stand-alone
correct-by-design controller that enjoys the favorable performance of RL.
Second, we incorporate a magic book in a bounded model checking (BMC)
procedure. BMC allows us to find numerous traces of the plant under the control
of the wizard, which a user can use to increase the trustworthiness of the
wizard and direct further training.
| [
{
"created": "Mon, 25 May 2020 15:45:03 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Aug 2020 21:12:51 GMT",
"version": "v2"
}
] | 2020-08-26 | [
[
"Alamdari",
"Parand Alizadeh",
""
],
[
"Avni",
"Guy",
""
],
[
"Henzinger",
"Thomas A.",
""
],
[
"Lukina",
"Anna",
""
]
] | Machine learning and formal methods have complementary benefits and drawbacks. In this work, we address the controller-design problem with a combination of techniques from both fields. The use of black-box neural networks in deep reinforcement learning (deep RL) poses a challenge for such a combination. Instead of reasoning formally about the output of deep RL, which we call the {\em wizard}, we extract from it a decision-tree based model, which we refer to as the {\em magic book}. Using the extracted model as an intermediary, we are able to handle problems that are infeasible for either deep RL or formal methods by themselves. First, we suggest, for the first time, combining a magic book in a synthesis procedure. We synthesize a stand-alone correct-by-design controller that enjoys the favorable performance of RL. Second, we incorporate a magic book in a bounded model checking (BMC) procedure. BMC allows us to find numerous traces of the plant under the control of the wizard, which a user can use to increase the trustworthiness of the wizard and direct further training. |
1609.07672 | Roy Fox | Roy Fox | Information-Theoretic Methods for Planning and Learning in Partially
Observable Markov Decision Processes | PhD thesis, Hebrew University of Jerusalem, 9/2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bounded agents are limited by intrinsic constraints on their ability to
process information that is available in their sensors and memory and choose
actions and memory updates. In this dissertation, we model these constraints as
information-rate constraints on communication channels connecting these various
internal components of the agent. We make four major contributions detailed
below and many smaller contributions detailed in each section. First, we
formulate the problem of optimizing the agent under both extrinsic and
intrinsic constraints and develop the main tools for solving it. Second, we
identify another reason for the challenging convergence properties of the
optimization algorithm, which is the bifurcation structure of the update
operator near phase transitions. Third, we study the special case of
linear-Gaussian dynamics and quadratic cost (LQG), where the optimal solution
has a particularly simple and solvable form. Fourth, we explore the learning
task, where the model of the world dynamics is unknown and sample-based updates
are used instead.
| [
{
"created": "Sat, 24 Sep 2016 20:45:37 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Mar 2017 04:57:49 GMT",
"version": "v2"
}
] | 2017-03-31 | [
[
"Fox",
"Roy",
""
]
] | Bounded agents are limited by intrinsic constraints on their ability to process information that is available in their sensors and memory and choose actions and memory updates. In this dissertation, we model these constraints as information-rate constraints on communication channels connecting these various internal components of the agent. We make four major contributions detailed below and many smaller contributions detailed in each section. First, we formulate the problem of optimizing the agent under both extrinsic and intrinsic constraints and develop the main tools for solving it. Second, we identify another reason for the challenging convergence properties of the optimization algorithm, which is the bifurcation structure of the update operator near phase transitions. Third, we study the special case of linear-Gaussian dynamics and quadratic cost (LQG), where the optimal solution has a particularly simple and solvable form. Fourth, we explore the learning task, where the model of the world dynamics is unknown and sample-based updates are used instead. |
2108.10755 | Jin Cheevaprawatdomrong | Jin Cheevaprawatdomrong, Alexandra Schofield, Attapol T. Rutherford | More Than Words: Collocation Tokenization for Latent Dirichlet
Allocation Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Traditionally, Latent Dirichlet Allocation (LDA) ingests words in a
collection of documents to discover their latent topics using word-document
co-occurrences. However, it is unclear how to achieve the best results for
languages without marked word boundaries such as Chinese and Thai. Here, we
explore the use of Pearson's chi-squared test, t-statistics, and Word Pair
Encoding (WPE) to produce tokens as input to the LDA model. The Chi-squared, t,
and WPE tokenizers are trained on Wikipedia text to look for words that should
be grouped together, such as compound nouns, proper nouns, and complex event
verbs. We propose a new metric for measuring the clustering quality in settings
where the vocabularies of the models differ. Based on this metric and other
established metrics, we show that topics trained with merged tokens result in
topic keys that are clearer, more coherent, and more effective at
distinguishing topics than those of unmerged models.
| [
{
"created": "Tue, 24 Aug 2021 14:08:19 GMT",
"version": "v1"
}
] | 2021-08-25 | [
[
"Cheevaprawatdomrong",
"Jin",
""
],
[
"Schofield",
"Alexandra",
""
],
[
"Rutherford",
"Attapol T.",
""
]
] | Traditionally, Latent Dirichlet Allocation (LDA) ingests words in a collection of documents to discover their latent topics using word-document co-occurrences. However, it is unclear how to achieve the best results for languages without marked word boundaries such as Chinese and Thai. Here, we explore the use of Pearson's chi-squared test, t-statistics, and Word Pair Encoding (WPE) to produce tokens as input to the LDA model. The Chi-squared, t, and WPE tokenizers are trained on Wikipedia text to look for words that should be grouped together, such as compound nouns, proper nouns, and complex event verbs. We propose a new metric for measuring the clustering quality in settings where the vocabularies of the models differ. Based on this metric and other established metrics, we show that topics trained with merged tokens result in topic keys that are clearer, more coherent, and more effective at distinguishing topics than those of unmerged models. |
1811.00907 | Ilia Kulikov | Ilia Kulikov, Alexander H. Miller, Kyunghyun Cho, Jason Weston | Importance of Search and Evaluation Strategies in Neural Dialogue
Modeling | iNLG 2019 camera ready version | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the impact of search strategies in neural dialogue modeling.
We first compare two standard search algorithms, greedy and beam search, as
well as our newly proposed iterative beam search which produces a more diverse
set of candidate responses. We evaluate these strategies in realistic full
conversations with humans and propose a model-based Bayesian calibration to
address annotator bias. These conversations are analyzed using two automatic
metrics: log-probabilities assigned by the model and utterance diversity. Our
experiments reveal that better search algorithms lead to higher rated
conversations. However, finding the optimal selection mechanism to choose from
a more diverse set of candidates is still an open question.
| [
{
"created": "Fri, 2 Nov 2018 14:54:50 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Dec 2018 10:11:54 GMT",
"version": "v2"
},
{
"created": "Sun, 3 Nov 2019 11:21:56 GMT",
"version": "v3"
}
] | 2019-11-05 | [
[
"Kulikov",
"Ilia",
""
],
[
"Miller",
"Alexander H.",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Weston",
"Jason",
""
]
] | We investigate the impact of search strategies in neural dialogue modeling. We first compare two standard search algorithms, greedy and beam search, as well as our newly proposed iterative beam search which produces a more diverse set of candidate responses. We evaluate these strategies in realistic full conversations with humans and propose a model-based Bayesian calibration to address annotator bias. These conversations are analyzed using two automatic metrics: log-probabilities assigned by the model and utterance diversity. Our experiments reveal that better search algorithms lead to higher rated conversations. However, finding the optimal selection mechanism to choose from a more diverse set of candidates is still an open question. |
2005.04093 | Aleks Ontman | Joshua Porter, Aleks Ontman | Importing Relationships into a Running Graph Database Using Parallel
Processing | 5 pages, code provided on GitHub
https://github.com/Lnofeisone/graph-iterateRelationship | null | null | null | cs.DC cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Importing relationships into a running graph database using multiple threads
running concurrently is a difficult task, as multiple threads cannot write
information to the same node at the same time. Here we present an algorithm in
which relationships are sorted into bins, then imported such that no two
threads ever access the same node concurrently. When this algorithm was
implemented as a procedure to run on the Neo4j graph database, it reduced the
time to import relationships by up to 69% when 32 threads were used.
| [
{
"created": "Tue, 5 May 2020 14:31:29 GMT",
"version": "v1"
}
] | 2020-05-11 | [
[
"Porter",
"Joshua",
""
],
[
"Ontman",
"Aleks",
""
]
] | Importing relationships into a running graph database using multiple threads running concurrently is a difficult task, as multiple threads cannot write information to the same node at the same time. Here we present an algorithm in which relationships are sorted into bins, then imported such that no two threads ever access the same node concurrently. When this algorithm was implemented as a procedure to run on the Neo4j graph database, it reduced the time to import relationships by up to 69% when 32 threads were used. |
2305.13021 | Ambre Davat | Ambre Davat (GIPSA-PCMD,LIG), V\'eronique Auberg\'e (LIG), Gang Feng
(GIPSA-lab) | Can we hear physical and social space together through prosody? | null | Speech Prosody 2020, May 2020, Tokyo, Japan. pp.715-719 | 10.21437/SpeechProsody.2020-146 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When human listeners try to guess the spatial position of a speech source,
they are influenced by the speaker's production level, regardless of the
intensity level reaching their ears. Because the perception of distance is a
very difficult task, they rely on their own experience, which tells them that a
whispering talker is close to them, and that a shouting talker is far away.
This study aims to test if similar results could be obtained for prosodic
variations produced by a human speaker in an everyday life environment. It
consists of a localization task, during which blindfolded subjects had to
estimate the incoming voice direction, speaker orientation and distance of a
trained female speaker, who uttered single words, following instructions
concerning intensity and social-affect to be performed. This protocol was
implemented in two experiments. First, a complex pretext task was used in order
to distract the subjects from the strange behavior of the speaker. In
contrast, during the second experiment, the subjects were fully aware of the
prosodic variations, which allowed them to adapt their perception. Results show
the importance of the pretext task, and suggest that the perception of the
speaker's orientation can be influenced by voice intensity.
| [
{
"created": "Mon, 22 May 2023 13:25:01 GMT",
"version": "v1"
}
] | 2023-05-23 | [
[
"Davat",
"Ambre",
"",
"GIPSA-PCMD,LIG"
],
[
"Aubergé",
"Véronique",
"",
"LIG"
],
[
"Feng",
"Gang",
"",
"GIPSA-lab"
]
] | When human listeners try to guess the spatial position of a speech source, they are influenced by the speaker's production level, regardless of the intensity level reaching their ears. Because the perception of distance is a very difficult task, they rely on their own experience, which tells them that a whispering talker is close to them, and that a shouting talker is far away. This study aims to test if similar results could be obtained for prosodic variations produced by a human speaker in an everyday life environment. It consists of a localization task, during which blindfolded subjects had to estimate the incoming voice direction, speaker orientation and distance of a trained female speaker, who uttered single words, following instructions concerning intensity and social-affect to be performed. This protocol was implemented in two experiments. First, a complex pretext task was used in order to distract the subjects from the strange behavior of the speaker. In contrast, during the second experiment, the subjects were fully aware of the prosodic variations, which allowed them to adapt their perception. Results show the importance of the pretext task, and suggest that the perception of the speaker's orientation can be influenced by voice intensity. |
2004.13839 | Louis Falissard | Louis Falissard, Claire Morgand, Sylvie Roussel, Claire Imbaud, Walid
Ghosn, Karim Bounebache, Gr\'egoire Rey | Neural translation and automated recognition of ICD10 medical entities
from natural language | null | null | null | null | cs.CL cs.CY cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recognition of medical entities from natural language is a ubiquitous
problem in the medical field, with applications ranging from medical act coding
to the analysis of electronic health data for public health. It is however a
complex task usually requiring human expert intervention, thus making it
expensive and time-consuming. The recent advances in artificial intelligence,
specifically the rise of deep learning methods, have enabled computers to make
efficient decisions on a number of complex problems, with the notable example
of neural sequence models and their powerful applications in natural language
processing. They however require a considerable amount of data to learn from,
which is typically their main limiting factor. However, the C\'epiDc stores an
exhaustive database of death certificates at the French national scale,
amounting to several millions of natural language examples provided with their
associated human coded medical entities available to the machine learning
practitioner. This article investigates the applications of deep neural
sequence models to the medical entity recognition from natural language
problem.
| [
{
"created": "Fri, 27 Mar 2020 18:17:53 GMT",
"version": "v1"
},
{
"created": "Wed, 6 May 2020 10:30:24 GMT",
"version": "v2"
}
] | 2020-05-07 | [
[
"Falissard",
"Louis",
""
],
[
"Morgand",
"Claire",
""
],
[
"Roussel",
"Sylvie",
""
],
[
"Imbaud",
"Claire",
""
],
[
"Ghosn",
"Walid",
""
],
[
"Bounebache",
"Karim",
""
],
[
"Rey",
"Grégoire",
""
]
] | The recognition of medical entities from natural language is a ubiquitous problem in the medical field, with applications ranging from medical act coding to the analysis of electronic health data for public health. It is however a complex task usually requiring human expert intervention, thus making it expensive and time-consuming. The recent advances in artificial intelligence, specifically the rise of deep learning methods, have enabled computers to make efficient decisions on a number of complex problems, with the notable example of neural sequence models and their powerful applications in natural language processing. They however require a considerable amount of data to learn from, which is typically their main limiting factor. However, the C\'epiDc stores an exhaustive database of death certificates at the French national scale, amounting to several millions of natural language examples provided with their associated human coded medical entities available to the machine learning practitioner. This article investigates the applications of deep neural sequence models to the medical entity recognition from natural language problem. |
2203.09446 | Fabian Bongratz | Fabian Bongratz, Anne-Marie Rickmann, Sebastian P\"olsterl, Christian
Wachinger | Vox2Cortex: Fast Explicit Reconstruction of Cortical Surfaces from 3D
MRI Scans with Geometric Deep Neural Networks | Accepted at CVPR 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The reconstruction of cortical surfaces from brain magnetic resonance imaging
(MRI) scans is essential for quantitative analyses of cortical thickness and
sulcal morphology. Although traditional and deep learning-based algorithmic
pipelines exist for this purpose, they have two major drawbacks: lengthy
runtimes of multiple hours (traditional) or intricate post-processing, such as
mesh extraction and topology correction (deep learning-based). In this work, we
address both of these issues and propose Vox2Cortex, a deep learning-based
algorithm that directly yields topologically correct, three-dimensional meshes
of the boundaries of the cortex. Vox2Cortex leverages convolutional and graph
convolutional neural networks to deform an initial template to the densely
folded geometry of the cortex represented by an input MRI scan. We show in
extensive experiments on three brain MRI datasets that our meshes are as
accurate as the ones reconstructed by state-of-the-art methods in the field,
without the need for time- and resource-intensive post-processing. To
accurately reconstruct the tightly folded cortex, we work with meshes
containing about 168,000 vertices at test time, scaling deep explicit
reconstruction methods to a new level.
| [
{
"created": "Thu, 17 Mar 2022 17:06:00 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Mar 2022 11:10:19 GMT",
"version": "v2"
}
] | 2022-03-21 | [
[
"Bongratz",
"Fabian",
""
],
[
"Rickmann",
"Anne-Marie",
""
],
[
"Pölsterl",
"Sebastian",
""
],
[
"Wachinger",
"Christian",
""
]
] | The reconstruction of cortical surfaces from brain magnetic resonance imaging (MRI) scans is essential for quantitative analyses of cortical thickness and sulcal morphology. Although traditional and deep learning-based algorithmic pipelines exist for this purpose, they have two major drawbacks: lengthy runtimes of multiple hours (traditional) or intricate post-processing, such as mesh extraction and topology correction (deep learning-based). In this work, we address both of these issues and propose Vox2Cortex, a deep learning-based algorithm that directly yields topologically correct, three-dimensional meshes of the boundaries of the cortex. Vox2Cortex leverages convolutional and graph convolutional neural networks to deform an initial template to the densely folded geometry of the cortex represented by an input MRI scan. We show in extensive experiments on three brain MRI datasets that our meshes are as accurate as the ones reconstructed by state-of-the-art methods in the field, without the need for time- and resource-intensive post-processing. To accurately reconstruct the tightly folded cortex, we work with meshes containing about 168,000 vertices at test time, scaling deep explicit reconstruction methods to a new level. |
1812.03219 | Aaron Springer | Aaron Springer, Victoria Hollis, Steve Whittaker | Dice in the Black Box: User Experiences with an Inscrutable Algorithm | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate that users may be prone to place an inordinate amount of trust
in black box algorithms that are framed as intelligent. We deploy an algorithm
that purportedly assesses the positivity and negativity of a user's emotional
writing. In actuality, the algorithm responds in a random fashion. We
qualitatively examine the paths to trust that users followed while testing the
system. In light of the ease with which users may trust systems exhibiting
"intelligent behavior", we recommend corrective approaches.
| [
{
"created": "Fri, 7 Dec 2018 21:37:49 GMT",
"version": "v1"
}
] | 2018-12-11 | [
[
"Springer",
"Aaron",
""
],
[
"Hollis",
"Victoria",
""
],
[
"Whittaker",
"Steve",
""
]
] | We demonstrate that users may be prone to place an inordinate amount of trust in black box algorithms that are framed as intelligent. We deploy an algorithm that purportedly assesses the positivity and negativity of a user's emotional writing. In actuality, the algorithm responds in a random fashion. We qualitatively examine the paths to trust that users followed while testing the system. In light of the ease with which users may trust systems exhibiting "intelligent behavior", we recommend corrective approaches. |
2008.00188 | Shihao Xu | Haocong Rao, Shihao Xu, Xiping Hu, Jun Cheng, Bin Hu | Augmented Skeleton Based Contrastive Action Learning with Momentum LSTM
for Unsupervised Action Recognition | Accepted by Information Sciences. Our codes are available at
https://github.com/Mikexu007/AS-CAL | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action recognition via 3D skeleton data has become an important topic in
recent years. Most existing methods either extract hand-crafted descriptors or
learn action representations by supervised learning paradigms that require
massive labeled data. In this paper, we for the first time propose a
contrastive action learning paradigm named AS-CAL that can leverage different
augmentations of unlabeled skeleton data to learn action representations in an
unsupervised manner. Specifically, we first propose to contrast similarity
between augmented instances (query and key) of the input skeleton sequence,
which are transformed by multiple novel augmentation strategies, to learn
inherent action patterns ("pattern-invariance") of different skeleton
transformations. Second, to encourage learning the pattern-invariance with more
consistent action representations, we propose a momentum LSTM, which is
implemented as the momentum-based moving average of LSTM based query encoder,
to encode long-term action dynamics of the key sequence. Third, we introduce a
queue to store the encoded keys, which allows our model to flexibly reuse
preceding keys and build a more consistent dictionary to improve contrastive
learning. Last, by temporally averaging the hidden states of action learned by
the query encoder, a novel representation named Contrastive Action Encoding
(CAE) is proposed to represent human actions effectively. Extensive
experiments show that our approach typically improves existing hand-crafted
methods by 10-50% top-1 accuracy, and it can achieve comparable or even
superior performance to numerous supervised learning methods.
| [
{
"created": "Sat, 1 Aug 2020 06:37:57 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Aug 2020 01:32:35 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Aug 2020 13:14:59 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Apr 2021 08:14:45 GMT",
"version": "v4"
}
] | 2021-04-05 | [
[
"Rao",
"Haocong",
""
],
[
"Xu",
"Shihao",
""
],
[
"Hu",
"Xiping",
""
],
[
"Cheng",
"Jun",
""
],
[
"Hu",
"Bin",
""
]
] | Action recognition via 3D skeleton data has become an important topic in recent years. Most existing methods either extract hand-crafted descriptors or learn action representations by supervised learning paradigms that require massive labeled data. In this paper, we for the first time propose a contrastive action learning paradigm named AS-CAL that can leverage different augmentations of unlabeled skeleton data to learn action representations in an unsupervised manner. Specifically, we first propose to contrast similarity between augmented instances (query and key) of the input skeleton sequence, which are transformed by multiple novel augmentation strategies, to learn inherent action patterns ("pattern-invariance") of different skeleton transformations. Second, to encourage learning the pattern-invariance with more consistent action representations, we propose a momentum LSTM, which is implemented as the momentum-based moving average of LSTM based query encoder, to encode long-term action dynamics of the key sequence. Third, we introduce a queue to store the encoded keys, which allows our model to flexibly reuse preceding keys and build a more consistent dictionary to improve contrastive learning. Last, by temporally averaging the hidden states of action learned by the query encoder, a novel representation named Contrastive Action Encoding (CAE) is proposed to represent human actions effectively. Extensive experiments show that our approach typically improves existing hand-crafted methods by 10-50% top-1 accuracy, and it can achieve comparable or even superior performance to numerous supervised learning methods. |
2211.10030 | Qinggang Zhang | Qinggang Zhang, Junnan Dong, Keyu Duan, Xiao Huang, Yezi Liu, Linchuan
Xu | Contrastive Knowledge Graph Error Detection | null | CIKM 2022: Proceedings of the 31st ACM International Conference on
Information & Knowledge Management | 10.1145/3511808.3557264 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Graph (KG) errors introduce non-negligible noise, severely
affecting KG-related downstream tasks. Detecting errors in KGs is challenging
since the patterns of errors are unknown and diverse, while ground-truth labels
are rare or even unavailable. A traditional solution is to construct logical
rules to verify triples, but it is not generalizable since different KGs have
distinct rules with domain knowledge involved. Recent studies focus on
designing tailored detectors or ranking triples based on KG embedding loss.
However, they all rely on negative samples for training, which are generated by
randomly replacing the head or tail entity of existing triples. Such a negative
sampling strategy is not enough for prototyping practical KG errors, e.g.,
(Bruce_Lee, place_of_birth, China), in which the three elements are often
relevant, although mismatched. We desire a more effective unsupervised learning
mechanism tailored for KG error detection. To this end, we propose a novel
framework - ContrAstive knowledge Graph Error Detection (CAGED). It introduces
contrastive learning into KG learning and provides a novel way of modeling KG.
Instead of following the traditional setting, i.e., considering entities as
nodes and relations as semantic edges, CAGED augments a KG into different
hyper-views, by regarding each relational triple as a node. After joint
training with KG embedding and contrastive learning loss, CAGED assesses the
trustworthiness of each triple based on two learning signals, i.e., the
consistency of triple representations across multi-views and the
self-consistency within the triple. Extensive experiments on three real-world
KGs show that CAGED outperforms state-of-the-art methods in KG error detection.
Our codes and datasets are available at https://github.com/Qing145/CAGED.git.
| [
{
"created": "Fri, 18 Nov 2022 05:01:19 GMT",
"version": "v1"
}
] | 2022-11-21 | [
[
"Zhang",
"Qinggang",
""
],
[
"Dong",
"Junnan",
""
],
[
"Duan",
"Keyu",
""
],
[
"Huang",
"Xiao",
""
],
[
"Liu",
"Yezi",
""
],
[
"Xu",
"Linchuan",
""
]
] | Knowledge Graph (KG) errors introduce non-negligible noise, severely affecting KG-related downstream tasks. Detecting errors in KGs is challenging since the patterns of errors are unknown and diverse, while ground-truth labels are rare or even unavailable. A traditional solution is to construct logical rules to verify triples, but it is not generalizable since different KGs have distinct rules with domain knowledge involved. Recent studies focus on designing tailored detectors or ranking triples based on KG embedding loss. However, they all rely on negative samples for training, which are generated by randomly replacing the head or tail entity of existing triples. Such a negative sampling strategy is not enough for prototyping practical KG errors, e.g., (Bruce_Lee, place_of_birth, China), in which the three elements are often relevant, although mismatched. We desire a more effective unsupervised learning mechanism tailored for KG error detection. To this end, we propose a novel framework - ContrAstive knowledge Graph Error Detection (CAGED). It introduces contrastive learning into KG learning and provides a novel way of modeling KG. Instead of following the traditional setting, i.e., considering entities as nodes and relations as semantic edges, CAGED augments a KG into different hyper-views, by regarding each relational triple as a node. After joint training with KG embedding and contrastive learning loss, CAGED assesses the trustworthiness of each triple based on two learning signals, i.e., the consistency of triple representations across multi-views and the self-consistency within the triple. Extensive experiments on three real-world KGs show that CAGED outperforms state-of-the-art methods in KG error detection. Our codes and datasets are available at https://github.com/Qing145/CAGED.git. |
1905.06668 | Achim Blumensath | Achim Blumensath and Felix Wolf | Bisimulation Invariant Monadic-Second Order Logic in the Finite | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider bisimulation-invariant monadic second-order logic over various
classes of finite transition systems. We present several combinatorial
characterisations of when the expressive power of this fragment coincides with
that of the modal mu-calculus. Using these characterisations we prove for some
simple classes of transition systems that this is indeed the case. In
particular, we show that, over the class of all finite transition systems with
Cantor-Bendixson rank at most k, bisimulation-invariant MSO coincides with
L_mu.
| [
{
"created": "Thu, 16 May 2019 11:37:41 GMT",
"version": "v1"
}
] | 2019-05-17 | [
[
"Blumensath",
"Achim",
""
],
[
"Wolf",
"Felix",
""
]
] | We consider bisimulation-invariant monadic second-order logic over various classes of finite transition systems. We present several combinatorial characterisations of when the expressive power of this fragment coincides with that of the modal mu-calculus. Using these characterisations we prove for some simple classes of transition systems that this is indeed the case. In particular, we show that, over the class of all finite transition systems with Cantor-Bendixson rank at most k, bisimulation-invariant MSO coincides with L_mu. |
2311.05050 | Wanda Hou | Wanda Hou, Miao Li, Yi-Zhuang You | Quantum Generative Modeling of Sequential Data with Trainable Token
Embedding | 5 pages, 4 figures | null | null | null | cs.LG quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative models are a class of machine learning models that aim to learn
the underlying probability distribution of data. Unlike discriminative models,
generative models focus on capturing the data's inherent structure, allowing
them to generate new samples that resemble the original data. To fully exploit
the potential of modeling probability distributions using quantum physics, a
quantum-inspired generative model known as the Born machine has shown great
advances in learning classical and quantum data within the matrix product
state (MPS) framework. Born machines support tractable log-likelihood,
autoregressive and mask sampling, and have shown outstanding performance in
various unsupervised learning tasks. However, much of the current research has
been centered on improving the expressive power of MPS, predominantly embedding
each token directly by a corresponding tensor index. In this study, we
generalize the embedding method into trainable quantum measurement operators
that can be simultaneously honed with MPS. Our study indicates that, combined
with trainable embedding, Born machines can exhibit better performance and
learn deeper correlations from the dataset.
| [
{
"created": "Wed, 8 Nov 2023 22:56:37 GMT",
"version": "v1"
}
] | 2023-11-14 | [
[
"Hou",
"Wanda",
""
],
[
"Li",
"Miao",
""
],
[
"You",
"Yi-Zhuang",
""
]
] | Generative models are a class of machine learning models that aim to learn the underlying probability distribution of data. Unlike discriminative models, generative models focus on capturing the data's inherent structure, allowing them to generate new samples that resemble the original data. To fully exploit the potential of modeling probability distributions using quantum physics, a quantum-inspired generative model known as the Born machine has shown great advances in learning classical and quantum data within the matrix product state (MPS) framework. Born machines support tractable log-likelihood, autoregressive and mask sampling, and have shown outstanding performance in various unsupervised learning tasks. However, much of the current research has been centered on improving the expressive power of MPS, predominantly embedding each token directly by a corresponding tensor index. In this study, we generalize the embedding method into trainable quantum measurement operators that can be simultaneously honed with MPS. Our study indicates that, combined with trainable embedding, Born machines can exhibit better performance and learn deeper correlations from the dataset. |
2407.19451 | Chengan He | Chengan He, Xin Sun, Zhixin Shu, Fujun Luan, S\"oren Pirk, Jorge
Alejandro Amador Herrera, Dominik L. Michels, Tuanfeng Y. Wang, Meng Zhang,
Holly Rushmeier, Yi Zhou | Perm: A Parametric Representation for Multi-Style 3D Hair Modeling | Project page: https://cs.yale.edu/homes/che/projects/perm/ | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | We present Perm, a learned parametric model of human 3D hair designed to
facilitate various hair-related applications. Unlike previous work that jointly
models the global hair shape and local strand details, we propose to
disentangle them using a PCA-based strand representation in the frequency
domain, thereby allowing more precise editing and output control. Specifically,
we leverage our strand representation to fit and decompose hair geometry
textures into low- to high-frequency hair structures. These decomposed textures
are later parameterized with different generative models, emulating common
stages in the hair modeling process. We conduct extensive experiments to
validate the architecture design of \textsc{Perm}, and finally deploy the
trained model as a generic prior to solve task-agnostic problems, further
showcasing its flexibility and superiority in tasks such as 3D hair
parameterization, hairstyle interpolation, single-view hair reconstruction, and
hair-conditioned image generation. Our code, data, and supplemental can be
found at our project page: https://cs.yale.edu/homes/che/projects/perm/
| [
{
"created": "Sun, 28 Jul 2024 10:05:11 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jul 2024 04:10:53 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Aug 2024 04:01:03 GMT",
"version": "v3"
}
] | 2024-08-09 | [
[
"He",
"Chengan",
""
],
[
"Sun",
"Xin",
""
],
[
"Shu",
"Zhixin",
""
],
[
"Luan",
"Fujun",
""
],
[
"Pirk",
"Sören",
""
],
[
"Herrera",
"Jorge Alejandro Amador",
""
],
[
"Michels",
"Dominik L.",
""
],
[
"Wang",
"Tuanfeng Y.",
""
],
[
"Zhang",
"Meng",
""
],
[
"Rushmeier",
"Holly",
""
],
[
"Zhou",
"Yi",
""
]
] | We present Perm, a learned parametric model of human 3D hair designed to facilitate various hair-related applications. Unlike previous work that jointly models the global hair shape and local strand details, we propose to disentangle them using a PCA-based strand representation in the frequency domain, thereby allowing more precise editing and output control. Specifically, we leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures. These decomposed textures are later parameterized with different generative models, emulating common stages in the hair modeling process. We conduct extensive experiments to validate the architecture design of \textsc{Perm}, and finally deploy the trained model as a generic prior to solve task-agnostic problems, further showcasing its flexibility and superiority in tasks such as 3D hair parameterization, hairstyle interpolation, single-view hair reconstruction, and hair-conditioned image generation. Our code, data, and supplemental can be found at our project page: https://cs.yale.edu/homes/che/projects/perm/ |
1410.3688 | Khammassi Iyed Mr | Iyed Khammassi, Rachid Elazouzi, Majed Haddad and Issam Mabrouki | A Game Theoretic Model for Network Virus Protection | Technical report, 8 pages, 10 figures | null | null | null | cs.GT cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network virus propagation is influenced by various factors, some of
which are neglected in most existing models in the literature. In this
paper, we study network virus propagation from an epidemiological
viewpoint. We assume that nodes can be equipped with protection against
viruses and that the security of a node depends not only on its own
protection strategy but also on those chosen by other nodes in the network.
A crucial aspect is whether device owners, e.g., of smartphones, machines or
tablets, are willing to equip themselves with protection or to take the risk
of contamination in order to avoid paying for a new antivirus. We model the
interaction between nodes as a non-cooperative game in which each node has
two strategies: either to update the antivirus or not. To this aim, we
provide a full characterization of the equilibria of the game and
investigate the impact of the price of protection on the equilibrium, as
well as the efficiency of the protection at equilibrium. Further, we
consider more realistic scenarios in which the dynamics of the sources that
disseminate the virus evolve as a function of the popularity of the virus.
In this work, the interest of sources in the virus evolves under the
Influence Linear Threshold (HILT) model.
| [
{
"created": "Tue, 14 Oct 2014 13:37:45 GMT",
"version": "v1"
}
] | 2014-10-15 | [
[
"Khammassi",
"Iyed",
""
],
[
"Elazouzi",
"Rachid",
""
],
[
"Haddad",
"Majed",
""
],
[
"Mabrouki",
"Issam",
""
]
] | Network virus propagation is influenced by various factors, some of which are neglected in most existing models in the literature. In this paper, we study network virus propagation from an epidemiological viewpoint. We assume that nodes can be equipped with protection against viruses and that the security of a node depends not only on its own protection strategy but also on those chosen by other nodes in the network. A crucial aspect is whether device owners, e.g., of smartphones, machines or tablets, are willing to equip themselves with protection or to take the risk of contamination in order to avoid paying for a new antivirus. We model the interaction between nodes as a non-cooperative game in which each node has two strategies: either to update the antivirus or not. To this aim, we provide a full characterization of the equilibria of the game and investigate the impact of the price of protection on the equilibrium, as well as the efficiency of the protection at equilibrium. Further, we consider more realistic scenarios in which the dynamics of the sources that disseminate the virus evolve as a function of the popularity of the virus. In this work, the interest of sources in the virus evolves under the Influence Linear Threshold (HILT) model. |
2107.08909 | Takayuki Miura | Takayuki Miura, Satoshi Hasegawa, Toshiki Shibahara | MEGEX: Data-Free Model Extraction Attack against Gradient-Based
Explainable AI | 10 pages, 5 figures | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | The advance of explainable artificial intelligence, which provides reasons
for its predictions, is expected to accelerate the use of deep neural networks
in real-world settings such as Machine Learning as a Service (MLaaS), which
returns predictions on queried data with the trained model. Deep neural networks
deployed in MLaaS face the threat of model extraction attacks. A model
extraction attack is an attack to violate intellectual property and privacy in
which an adversary steals trained models in a cloud using only their
predictions. In particular, a data-free model extraction attack has been
proposed recently and is more critical. In this attack, an adversary uses a
generative model instead of preparing input data. The feasibility of this
attack, however, needs to be studied since it requires more queries than attacks
with surrogate datasets. In this paper, we propose MEGEX, a data-free model
extraction attack against a gradient-based explainable AI. In this method, an
adversary uses the explanations to train the generative model and reduces the
number of queries to steal the model. Our experiments show that our proposed
method reconstructs high-accuracy models -- 0.97$\times$ and 0.98$\times$ the
victim model accuracy on SVHN and CIFAR-10 datasets given 2M and 20M queries,
respectively. This implies that there is a trade-off between the
interpretability of models and the difficulty of stealing them.
| [
{
"created": "Mon, 19 Jul 2021 14:25:06 GMT",
"version": "v1"
}
] | 2021-07-20 | [
[
"Miura",
"Takayuki",
""
],
[
"Hasegawa",
"Satoshi",
""
],
[
"Shibahara",
"Toshiki",
""
]
] | The advance of explainable artificial intelligence, which provides reasons for its predictions, is expected to accelerate the use of deep neural networks in real-world settings such as Machine Learning as a Service (MLaaS), which returns predictions on queried data with the trained model. Deep neural networks deployed in MLaaS face the threat of model extraction attacks. A model extraction attack is an attack to violate intellectual property and privacy in which an adversary steals trained models in a cloud using only their predictions. In particular, a data-free model extraction attack has been proposed recently and is more critical. In this attack, an adversary uses a generative model instead of preparing input data. The feasibility of this attack, however, needs to be studied since it requires more queries than attacks with surrogate datasets. In this paper, we propose MEGEX, a data-free model extraction attack against a gradient-based explainable AI. In this method, an adversary uses the explanations to train the generative model and reduces the number of queries to steal the model. Our experiments show that our proposed method reconstructs high-accuracy models -- 0.97$\times$ and 0.98$\times$ the victim model accuracy on SVHN and CIFAR-10 datasets given 2M and 20M queries, respectively. This implies that there is a trade-off between the interpretability of models and the difficulty of stealing them. |
2005.03358 | Feitong Tan | Feitong Tan, Hao Zhu, Zhaopeng Cui, Siyu Zhu, Marc Pollefeys, Ping Tan | Self-Supervised Human Depth Estimation from Monocular Videos | Accepted by IEEE Conference on Computer Vision and Patten Recognition
(CVPR), 2020 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous methods for estimating detailed human depth often require supervised
training with `ground truth' depth data. This paper presents a self-supervised
method that can be trained on YouTube videos without known depth, which makes
training data collection simple and improves the generalization of the learned
network. The self-supervised learning is achieved by minimizing a
photo-consistency loss, which is evaluated between a video frame and its
neighboring frames warped according to the estimated depth and the 3D non-rigid
motion of the human body. To solve this non-rigid motion, we first estimate a
rough SMPL model at each video frame and compute the non-rigid body motion
accordingly, which enables self-supervised learning on estimating the shape
details. Experiments demonstrate that our method enjoys better generalization
and performs much better on data in the wild.
| [
{
"created": "Thu, 7 May 2020 09:45:11 GMT",
"version": "v1"
}
] | 2020-05-08 | [
[
"Tan",
"Feitong",
""
],
[
"Zhu",
"Hao",
""
],
[
"Cui",
"Zhaopeng",
""
],
[
"Zhu",
"Siyu",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Tan",
"Ping",
""
]
] | Previous methods for estimating detailed human depth often require supervised training with `ground truth' depth data. This paper presents a self-supervised method that can be trained on YouTube videos without known depth, which makes training data collection simple and improves the generalization of the learned network. The self-supervised learning is achieved by minimizing a photo-consistency loss, which is evaluated between a video frame and its neighboring frames warped according to the estimated depth and the 3D non-rigid motion of the human body. To solve this non-rigid motion, we first estimate a rough SMPL model at each video frame and compute the non-rigid body motion accordingly, which enables self-supervised learning on estimating the shape details. Experiments demonstrate that our method enjoys better generalization and performs much better on data in the wild. |
2107.05582 | Ilias Diakonikolas | Ilias Diakonikolas and Daniel M. Kane and Christos Tzamos | Forster Decomposition and Learning Halfspaces with Noise | null | null | null | null | cs.LG cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Forster transform is an operation that turns a distribution into one with
good anti-concentration properties. While a Forster transform does not always
exist, we show that any distribution can be efficiently decomposed as a
disjoint mixture of few distributions for which a Forster transform exists and
can be computed efficiently. As the main application of this result, we obtain
the first polynomial-time algorithm for distribution-independent PAC learning
of halfspaces in the Massart noise model with strongly polynomial sample
complexity, i.e., independent of the bit complexity of the examples. Previous
algorithms for this learning problem incurred sample complexity scaling
polynomially with the bit complexity, even though such a dependence is not
information-theoretically necessary.
| [
{
"created": "Mon, 12 Jul 2021 17:00:59 GMT",
"version": "v1"
}
] | 2021-07-13 | [
[
"Diakonikolas",
"Ilias",
""
],
[
"Kane",
"Daniel M.",
""
],
[
"Tzamos",
"Christos",
""
]
] | A Forster transform is an operation that turns a distribution into one with good anti-concentration properties. While a Forster transform does not always exist, we show that any distribution can be efficiently decomposed as a disjoint mixture of few distributions for which a Forster transform exists and can be computed efficiently. As the main application of this result, we obtain the first polynomial-time algorithm for distribution-independent PAC learning of halfspaces in the Massart noise model with strongly polynomial sample complexity, i.e., independent of the bit complexity of the examples. Previous algorithms for this learning problem incurred sample complexity scaling polynomially with the bit complexity, even though such a dependence is not information-theoretically necessary. |
2212.08665 | Yue Liu | Yue Liu, Xihong Yang, Sihang Zhou, Xinwang Liu, Zhen Wang, Ke Liang,
Wenxuan Tu, Liang Li, Jingcan Duan, Cancan Chen | Hard Sample Aware Network for Contrastive Deep Graph Clustering | add appendix | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contrastive deep graph clustering, which aims to divide nodes into disjoint
groups via contrastive mechanisms, is a challenging research topic. Among
recent works, hard sample mining-based algorithms have attracted great attention
for their promising performance. However, we find that the existing hard sample
mining methods have two problems as follows. 1) In the hardness measurement,
the important structural information is overlooked for similarity calculation,
degrading the representativeness of the selected hard negative samples. 2)
Previous works merely focus on the hard negative sample pairs while neglecting
the hard positive sample pairs. Nevertheless, samples within the same cluster
but with low similarity should also be carefully learned. To solve the
problems, we propose a novel contrastive deep graph clustering method dubbed
Hard Sample Aware Network (HSAN) by introducing a comprehensive similarity
measure criterion and a general dynamic sample weighting strategy. Concretely,
in our algorithm, the similarities between samples are calculated by
considering both the attribute embeddings and the structure embeddings, better
revealing sample relationships and assisting hardness measurement. Moreover,
under the guidance of the carefully collected high-confidence clustering
information, our proposed weight modulating function will first recognize the
positive and negative samples and then dynamically up-weight the hard sample
pairs while down-weighting the easy ones. In this way, our method can mine not
only the hard negative samples but also the hard positive samples, thus
improving the discriminative capability of the samples further. Extensive
experiments and analyses demonstrate the superiority and effectiveness of our
proposed method.
| [
{
"created": "Fri, 16 Dec 2022 16:57:37 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Dec 2022 05:33:18 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Jan 2023 09:25:10 GMT",
"version": "v3"
}
] | 2023-01-31 | [
[
"Liu",
"Yue",
""
],
[
"Yang",
"Xihong",
""
],
[
"Zhou",
"Sihang",
""
],
[
"Liu",
"Xinwang",
""
],
[
"Wang",
"Zhen",
""
],
[
"Liang",
"Ke",
""
],
[
"Tu",
"Wenxuan",
""
],
[
"Li",
"Liang",
""
],
[
"Duan",
"Jingcan",
""
],
[
"Chen",
"Cancan",
""
]
] | Contrastive deep graph clustering, which aims to divide nodes into disjoint groups via contrastive mechanisms, is a challenging research topic. Among the recent works, hard sample mining-based algorithms have attracted great attention for their promising performance. However, we find that the existing hard sample mining methods have two problems as follows. 1) In the hardness measurement, the important structural information is overlooked for similarity calculation, degrading the representativeness of the selected hard negative samples. 2) Previous works merely focus on the hard negative sample pairs while neglecting the hard positive sample pairs. Nevertheless, samples within the same cluster but with low similarity should also be carefully learned. To solve the problems, we propose a novel contrastive deep graph clustering method dubbed Hard Sample Aware Network (HSAN) by introducing a comprehensive similarity measure criterion and a general dynamic sample weighting strategy. Concretely, in our algorithm, the similarities between samples are calculated by considering both the attribute embeddings and the structure embeddings, better revealing sample relationships and assisting hardness measurement. Moreover, under the guidance of the carefully collected high-confidence clustering information, our proposed weight modulating function will first recognize the positive and negative samples and then dynamically up-weight the hard sample pairs while down-weighting the easy ones. In this way, our method can mine not only the hard negative samples but also the hard positive samples, thus improving the discriminative capability of the samples further. Extensive experiments and analyses demonstrate the superiority and effectiveness of our proposed method. |
2110.09974 | Meirui Jiang | Meirui Jiang, Xiaoxiao Li, Xiaofei Zhang, Michael Kamp, Qi Dou | UniFed: A Unified Framework for Federated Learning on Non-IID Image
Features | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to tackle non-iid data is a crucial topic in federated learning. This
challenging problem not only affects the training process, but also harms the
performance of clients not participating in training. Existing literature
mainly focuses on either side, yet still lacks a unified solution to handle
these two types (internal and external) of clients in a joint way. In this
work, we propose a unified framework to tackle the non-iid issues for internal
and external clients together. Firstly, we propose to use client-specific batch
normalization in either internal or external clients to alleviate feature
distribution shifts incurred by non-iid data. Then we present theoretical
analysis to demonstrate the benefits of client-specific batch normalization.
Specifically, we show that our approach promotes convergence speed for
federated training and yields a lower generalization error bound for external
clients. Furthermore, we use causal reasoning to form a causal view to explain
the advantages of our framework. Finally, we conduct extensive experiments on
natural and medical images to evaluate our method, where our method achieves
state-of-the-art performance, faster convergence, and shows good compatibility.
We also perform comprehensive analytical studies on a real-world medical
dataset to demonstrate the effectiveness.
| [
{
"created": "Tue, 19 Oct 2021 13:46:37 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Dec 2021 13:32:55 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Feb 2023 08:00:50 GMT",
"version": "v3"
}
] | 2023-02-17 | [
[
"Jiang",
"Meirui",
""
],
[
"Li",
"Xiaoxiao",
""
],
[
"Zhang",
"Xiaofei",
""
],
[
"Kamp",
"Michael",
""
],
[
"Dou",
"Qi",
""
]
] | How to tackle non-iid data is a crucial topic in federated learning. This challenging problem not only affects the training process, but also harms the performance of clients not participating in training. Existing literature mainly focuses on either side, yet still lacks a unified solution to handle these two types (internal and external) of clients in a joint way. In this work, we propose a unified framework to tackle the non-iid issues for internal and external clients together. Firstly, we propose to use client-specific batch normalization in either internal or external clients to alleviate feature distribution shifts incurred by non-iid data. Then we present theoretical analysis to demonstrate the benefits of client-specific batch normalization. Specifically, we show that our approach promotes convergence speed for federated training and yields a lower generalization error bound for external clients. Furthermore, we use causal reasoning to form a causal view to explain the advantages of our framework. Finally, we conduct extensive experiments on natural and medical images to evaluate our method, where our method achieves state-of-the-art performance, faster convergence, and shows good compatibility. We also perform comprehensive analytical studies on a real-world medical dataset to demonstrate the effectiveness. |
2002.00118 | Xingzhe He | Xingzhe He, Helen Lu Cao, Bo Zhu | AdvectiveNet: An Eulerian-Lagrangian Fluidic reservoir for Point Cloud
Processing | ICLR 2020 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel physics-inspired deep learning approach for point
cloud processing motivated by the natural flow phenomena in fluid mechanics.
Our learning architecture jointly defines data in an Eulerian world space,
using a static background grid, and a Lagrangian material space, using moving
particles. By introducing this Eulerian-Lagrangian representation, we are able
to naturally evolve and accumulate particle features using flow velocities
generated from a generalized, high-dimensional force field. We demonstrate the
efficacy of this system by solving various point cloud classification and
segmentation problems with state-of-the-art performance. The entire geometric
reservoir and data flow mimic the pipeline of the classic PIC/FLIP scheme in
modeling natural flow, bridging the disciplines of geometric machine learning
and physical simulation.
| [
{
"created": "Sat, 1 Feb 2020 01:21:05 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Feb 2020 01:33:56 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Jun 2020 19:44:09 GMT",
"version": "v3"
}
] | 2020-06-26 | [
[
"He",
"Xingzhe",
""
],
[
"Cao",
"Helen Lu",
""
],
[
"Zhu",
"Bo",
""
]
] | This paper presents a novel physics-inspired deep learning approach for point cloud processing motivated by the natural flow phenomena in fluid mechanics. Our learning architecture jointly defines data in an Eulerian world space, using a static background grid, and a Lagrangian material space, using moving particles. By introducing this Eulerian-Lagrangian representation, we are able to naturally evolve and accumulate particle features using flow velocities generated from a generalized, high-dimensional force field. We demonstrate the efficacy of this system by solving various point cloud classification and segmentation problems with state-of-the-art performance. The entire geometric reservoir and data flow mimic the pipeline of the classic PIC/FLIP scheme in modeling natural flow, bridging the disciplines of geometric machine learning and physical simulation. |
2408.03603 | Jiahao Zhang | Jiahao Zhang, Zilong Wang, Ruofan Wang, Xingjun Ma and Yu-Gang Jiang | EnJa: Ensemble Jailbreak on Large Language Models | null | null | null | null | cs.CR cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As Large Language Models (LLMs) are increasingly being deployed in
safety-critical applications, their vulnerability to potential jailbreaks --
malicious prompts that can disable the safety mechanism of LLMs -- has
attracted growing research attention. While alignment methods have been
proposed to protect LLMs from jailbreaks, many have found that aligned LLMs can
still be jailbroken by carefully crafted malicious prompts, producing content
that violates policy regulations. Existing jailbreak attacks on LLMs can be
categorized into prompt-level methods which make up stories/logic to circumvent
safety alignment and token-level attack methods which leverage gradient methods
to find adversarial tokens. In this work, we introduce the concept of Ensemble
Jailbreak and explore methods that can integrate prompt-level and token-level
jailbreak into a more powerful hybrid jailbreak attack. Specifically, we
propose a novel EnJa attack to hide harmful instructions using prompt-level
jailbreak, boost the attack success rate using a gradient-based attack, and
connect the two types of jailbreak attacks via a template-based connector. We
evaluate the effectiveness of EnJa on several aligned models and show that it
achieves a state-of-the-art attack success rate with fewer queries and is much
stronger than any individual jailbreak.
| [
{
"created": "Wed, 7 Aug 2024 07:46:08 GMT",
"version": "v1"
}
] | 2024-08-08 | [
[
"Zhang",
"Jiahao",
""
],
[
"Wang",
"Zilong",
""
],
[
"Wang",
"Ruofan",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | As Large Language Models (LLMs) are increasingly being deployed in safety-critical applications, their vulnerability to potential jailbreaks -- malicious prompts that can disable the safety mechanism of LLMs -- has attracted growing research attention. While alignment methods have been proposed to protect LLMs from jailbreaks, many have found that aligned LLMs can still be jailbroken by carefully crafted malicious prompts, producing content that violates policy regulations. Existing jailbreak attacks on LLMs can be categorized into prompt-level methods which make up stories/logic to circumvent safety alignment and token-level attack methods which leverage gradient methods to find adversarial tokens. In this work, we introduce the concept of Ensemble Jailbreak and explore methods that can integrate prompt-level and token-level jailbreak into a more powerful hybrid jailbreak attack. Specifically, we propose a novel EnJa attack to hide harmful instructions using prompt-level jailbreak, boost the attack success rate using a gradient-based attack, and connect the two types of jailbreak attacks via a template-based connector. We evaluate the effectiveness of EnJa on several aligned models and show that it achieves a state-of-the-art attack success rate with fewer queries and is much stronger than any individual jailbreak. |
2002.03830 | David W. Romero | David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn | Attentive Group Equivariant Convolutional Networks | Proceedings of the 37th International Conference on Machine Learning
(ICML), 2020 | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although group convolutional networks are able to learn powerful
representations based on symmetry patterns, they lack explicit means to learn
meaningful relationships among them (e.g., relative positions and poses). In
this paper, we present attentive group equivariant convolutions, a
generalization of the group convolution, in which attention is applied during
the course of convolution to accentuate meaningful symmetry combinations and
suppress non-plausible, misleading ones. We indicate that prior work on visual
attention can be described as special cases of our proposed framework and show
empirically that our attentive group equivariant convolutional networks
consistently outperform conventional group convolutional networks on benchmark
image datasets. Simultaneously, we provide interpretability to the learned
concepts through the visualization of equivariant attention maps.
| [
{
"created": "Fri, 7 Feb 2020 14:06:24 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Feb 2020 12:34:17 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jun 2020 07:41:35 GMT",
"version": "v3"
}
] | 2020-07-01 | [
[
"Romero",
"David W.",
""
],
[
"Bekkers",
"Erik J.",
""
],
[
"Tomczak",
"Jakub M.",
""
],
[
"Hoogendoorn",
"Mark",
""
]
] | Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses). In this paper, we present attentive group equivariant convolutions, a generalization of the group convolution, in which attention is applied during the course of convolution to accentuate meaningful symmetry combinations and suppress non-plausible, misleading ones. We indicate that prior work on visual attention can be described as special cases of our proposed framework and show empirically that our attentive group equivariant convolutional networks consistently outperform conventional group convolutional networks on benchmark image datasets. Simultaneously, we provide interpretability to the learned concepts through the visualization of equivariant attention maps. |
2302.09688 | Daniel Karl I. Weidele | Daniel Karl I. Weidele, Shazia Afzal, Abel N. Valente, Cole Makuch,
Owen Cornec, Long Vu, Dharmashankar Subramanian, Werner Geyer, Rahul Nair,
Inge Vejsbjerg, Radu Marinescu, Paulito Palmes, Elizabeth M. Daly, Loraine
Franke, Daniel Haehn | AutoDOViz: Human-Centered Automation for Decision Optimization | null | null | 10.1145/3581641.3584094 | null | cs.HC cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present AutoDOViz, an interactive user interface for automated decision
optimization (AutoDO) using reinforcement learning (RL). Decision optimization
(DO) has classically been practiced by dedicated DO researchers where experts
need to spend long periods of time fine-tuning a solution through
trial-and-error. AutoML pipeline search has sought to make it easier for a data
scientist to find the best machine learning pipeline by leveraging automation
to search and tune the solution. More recently, these advances have been
applied to the domain of AutoDO, with a similar goal to find the best
reinforcement learning pipeline through algorithm selection and parameter
tuning. However, Decision Optimization requires significantly more complex
problem specification when compared to an ML problem. AutoDOViz seeks to lower
the barrier of entry for data scientists in problem specification for
reinforcement learning problems, leverage the benefits of AutoDO algorithms for
RL pipeline search and finally, create visualizations and policy insights in
order to facilitate the typical interactive nature when communicating problem
formulation and solution proposals between DO experts and domain experts. In
this paper, we report our findings from semi-structured expert interviews with
DO practitioners as well as business consultants, leading to design
requirements for human-centered automation for DO with RL. We evaluate a system
implementation with data scientists and find that they are significantly more
open to engage in DO after using our proposed solution. AutoDOViz further
increases trust in RL agent models and makes the automated training and
evaluation process more comprehensible. As shown for other automation in ML
tasks, we also conclude that automation of RL for DO can benefit from the user
and vice versa when the interface promotes human-in-the-loop interaction.
| [
{
"created": "Sun, 19 Feb 2023 23:06:19 GMT",
"version": "v1"
}
] | 2023-02-21 | [
[
"Weidele",
"Daniel Karl I.",
""
],
[
"Afzal",
"Shazia",
""
],
[
"Valente",
"Abel N.",
""
],
[
"Makuch",
"Cole",
""
],
[
"Cornec",
"Owen",
""
],
[
"Vu",
"Long",
""
],
[
"Subramanian",
"Dharmashankar",
""
],
[
"Geyer",
"Werner",
""
],
[
"Nair",
"Rahul",
""
],
[
"Vejsbjerg",
"Inge",
""
],
[
"Marinescu",
"Radu",
""
],
[
"Palmes",
"Paulito",
""
],
[
"Daly",
"Elizabeth M.",
""
],
[
"Franke",
"Loraine",
""
],
[
"Haehn",
"Daniel",
""
]
] | We present AutoDOViz, an interactive user interface for automated decision optimization (AutoDO) using reinforcement learning (RL). Decision optimization (DO) has classically been practiced by dedicated DO researchers where experts need to spend long periods of time fine-tuning a solution through trial-and-error. AutoML pipeline search has sought to make it easier for a data scientist to find the best machine learning pipeline by leveraging automation to search and tune the solution. More recently, these advances have been applied to the domain of AutoDO, with a similar goal to find the best reinforcement learning pipeline through algorithm selection and parameter tuning. However, Decision Optimization requires significantly more complex problem specification when compared to an ML problem. AutoDOViz seeks to lower the barrier of entry for data scientists in problem specification for reinforcement learning problems, leverage the benefits of AutoDO algorithms for RL pipeline search and finally, create visualizations and policy insights in order to facilitate the typical interactive nature when communicating problem formulation and solution proposals between DO experts and domain experts. In this paper, we report our findings from semi-structured expert interviews with DO practitioners as well as business consultants, leading to design requirements for human-centered automation for DO with RL. We evaluate a system implementation with data scientists and find that they are significantly more open to engage in DO after using our proposed solution. AutoDOViz further increases trust in RL agent models and makes the automated training and evaluation process more comprehensible. As shown for other automation in ML tasks, we also conclude that automation of RL for DO can benefit from the user and vice versa when the interface promotes human-in-the-loop interaction. |
2206.12055 | Peng-Shuai Wang | Xin-Yang Zheng and Yang Liu and Peng-Shuai Wang and Xin Tong | SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation | Accepted to Computer Graphics Forum (SGP), 2022 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a StyleGAN2-based deep learning approach for 3D shape generation,
called SDF-StyleGAN, with the aim of reducing visual and geometric
dissimilarity between generated shapes and a shape collection. We extend
StyleGAN2 to 3D generation and utilize the implicit signed distance function
(SDF) as the 3D shape representation, and introduce two novel global and local
shape discriminators that distinguish real and fake SDF values and gradients to
significantly improve shape geometry and visual quality. We further complement
the evaluation metrics of 3D generative models with the shading-image-based
Fr\'echet inception distance (FID) scores to better assess visual quality and
shape distribution of the generated shapes. Experiments on shape generation
demonstrate the superior performance of SDF-StyleGAN over the state-of-the-art.
We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on
GAN inversion, including shape reconstruction, shape completion from partial
point clouds, single-view image-based shape generation, and shape style
editing. Extensive ablation studies justify the efficacy of our framework
design. Our code and trained models are available at
https://github.com/Zhengxinyang/SDF-StyleGAN.
| [
{
"created": "Fri, 24 Jun 2022 03:11:28 GMT",
"version": "v1"
}
] | 2022-06-27 | [
[
"Zheng",
"Xin-Yang",
""
],
[
"Liu",
"Yang",
""
],
[
"Wang",
"Peng-Shuai",
""
],
[
"Tong",
"Xin",
""
]
] | We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, with the aim of reducing visual and geometric dissimilarity between generated shapes and a shape collection. We extend StyleGAN2 to 3D generation and utilize the implicit signed distance function (SDF) as the 3D shape representation, and introduce two novel global and local shape discriminators that distinguish real and fake SDF values and gradients to significantly improve shape geometry and visual quality. We further complement the evaluation metrics of 3D generative models with the shading-image-based Fr\'echet inception distance (FID) scores to better assess visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state-of-the-art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-view image-based shape generation, and shape style editing. Extensive ablation studies justify the efficacy of our framework design. Our code and trained models are available at https://github.com/Zhengxinyang/SDF-StyleGAN. |
2308.07162 | David Guillermo Fajardo Ortiz Dr. | David Fajardo-Ortiz, Bart Thijs, Wolfgang Glanzel, Karin R. Sipido | Evolution of priorities in strategic funding for collaborative health
research. A comparison of the European Union Framework Programmes to the
program funding by the United States National Institutes of Health | null | null | null | null | cs.DL | http://creativecommons.org/licenses/by/4.0/ | The historical research-funding model, based on the curiosity and academic
interests of researchers, is giving way to new strategic funding models that
seek to meet societal needs. We investigated the impact of this trend on health
research funded by the two leading funding bodies worldwide, i.e. the National
Institutes of Health (NIH) in the United States, and the framework programs of
the European Union (EU). To this end, we performed a quantitative analysis of
the content of projects supported through programmatic funding by the EU and
NIH, in the period 2008-2014 and 2015-2020. We used machine learning for
classification of projects as basic biomedical research, or as more
implementation directed clinical therapeutic research, diagnostics research,
population research, or policy and management research. In addition, we
analyzed funding for major disease areas (cancer, cardio-metabolic and
infectious disease). We found that EU collaborative health research projects
clearly shifted towards more implementation research. In the US, the recently
implemented UM1 program has a similar profile with strong clinical therapeutic
research, while other NIH programs remain heavily oriented to basic biomedical
research. Funding for cancer research is present across all NIH and EU
programs, and in biomedical as well as more implementation directed projects,
while infectious diseases is an emerging theme. We conclude that demand for
solutions for medical needs leads to expanded funding for implementation- and
impact-oriented research. Basic biomedical research remains present in programs
driven by scientific initiative and strategies based on excellence, but may be
at risk of declining funding opportunities.
| [
{
"created": "Mon, 14 Aug 2023 14:17:34 GMT",
"version": "v1"
}
] | 2023-08-15 | [
[
"Fajardo-Ortiz",
"David",
""
],
[
"Thijs",
"Bart",
""
],
[
"Glanzel",
"Wolfgang",
""
],
[
"Sipido",
"Karin R.",
""
]
] | The historical research-funding model, based on the curiosity and academic interests of researchers, is giving way to new strategic funding models that seek to meet societal needs. We investigated the impact of this trend on health research funded by the two leading funding bodies worldwide, i.e. the National Institutes of Health (NIH) in the United States, and the framework programs of the European Union (EU). To this end, we performed a quantitative analysis of the content of projects supported through programmatic funding by the EU and NIH, in the period 2008-2014 and 2015-2020. We used machine learning for classification of projects as basic biomedical research, or as more implementation directed clinical therapeutic research, diagnostics research, population research, or policy and management research. In addition, we analyzed funding for major disease areas (cancer, cardio-metabolic and infectious disease). We found that EU collaborative health research projects clearly shifted towards more implementation research. In the US, the recently implemented UM1 program has a similar profile with strong clinical therapeutic research, while other NIH programs remain heavily oriented to basic biomedical research. Funding for cancer research is present across all NIH and EU programs, and in biomedical as well as more implementation directed projects, while infectious diseases is an emerging theme. We conclude that demand for solutions for medical needs leads to expanded funding for implementation- and impact-oriented research. Basic biomedical research remains present in programs driven by scientific initiative and strategies based on excellence, but may be at risk of declining funding opportunities. |
2404.01869 | Philipp Mondorf | Philipp Mondorf and Barbara Plank | Beyond Accuracy: Evaluating the Reasoning Behavior of Large Language
Models -- A Survey | COLM 2024, 27 pages, 2 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Large language models (LLMs) have recently shown impressive performance on
tasks involving reasoning, leading to a lively debate on whether these models
possess reasoning capabilities similar to humans. However, despite these
successes, the depth of LLMs' reasoning abilities remains uncertain. This
uncertainty partly stems from the predominant focus on task performance,
measured through shallow accuracy metrics, rather than a thorough investigation
of the models' reasoning behavior. This paper seeks to address this gap by
providing a comprehensive review of studies that go beyond task accuracy,
offering deeper insights into the models' reasoning processes. Furthermore, we
survey prevalent methodologies to evaluate the reasoning behavior of LLMs,
emphasizing current trends and efforts towards more nuanced reasoning analyses.
Our review suggests that LLMs tend to rely on surface-level patterns and
correlations in their training data, rather than on sophisticated reasoning
abilities. Additionally, we identify the need for further research that
delineates the key differences between human and LLM-based reasoning. Through
this survey, we aim to shed light on the complex reasoning processes within
LLMs.
| [
{
"created": "Tue, 2 Apr 2024 11:46:31 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Aug 2024 11:58:53 GMT",
"version": "v2"
}
] | 2024-08-07 | [
[
"Mondorf",
"Philipp",
""
],
[
"Plank",
"Barbara",
""
]
] | Large language models (LLMs) have recently shown impressive performance on tasks involving reasoning, leading to a lively debate on whether these models possess reasoning capabilities similar to humans. However, despite these successes, the depth of LLMs' reasoning abilities remains uncertain. This uncertainty partly stems from the predominant focus on task performance, measured through shallow accuracy metrics, rather than a thorough investigation of the models' reasoning behavior. This paper seeks to address this gap by providing a comprehensive review of studies that go beyond task accuracy, offering deeper insights into the models' reasoning processes. Furthermore, we survey prevalent methodologies to evaluate the reasoning behavior of LLMs, emphasizing current trends and efforts towards more nuanced reasoning analyses. Our review suggests that LLMs tend to rely on surface-level patterns and correlations in their training data, rather than on sophisticated reasoning abilities. Additionally, we identify the need for further research that delineates the key differences between human and LLM-based reasoning. Through this survey, we aim to shed light on the complex reasoning processes within LLMs. |
2403.09634 | Lingyi Hong | Lingyi Hong, Shilin Yan, Renrui Zhang, Wanyun Li, Xinyu Zhou, Pinxue
Guo, Kaixun Jiang, Yiting Chen, Jinglun Li, Zhaoyu Chen, Wenqiang Zhang | OneTracker: Unifying Visual Object Tracking with Foundation Models and
Efficient Tuning | Accepted to CVPR 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual object tracking aims to localize the target object of each frame based
on its initial appearance in the first frame. Depending on the input modality,
tracking tasks can be divided into RGB tracking and RGB+X (e.g. RGB+N, and
RGB+D) tracking. Despite the different input modalities, the core aspect of
tracking is temporal matching. Based on this common ground, we present a
general framework to unify various tracking tasks, termed as OneTracker.
OneTracker first performs a large-scale pre-training on an RGB tracker called
Foundation Tracker. This pretraining phase equips the Foundation Tracker with a
stable ability to estimate the location of the target object. Then we regard
other modality information as prompt and build Prompt Tracker upon Foundation
Tracker. Through freezing the Foundation Tracker and only adjusting some
additional trainable parameters, Prompt Tracker inherits the strong
localization ability from Foundation Tracker and achieves parameter-efficient
finetuning on downstream RGB+X tracking tasks. To evaluate the effectiveness of
our general framework OneTracker, which consists of Foundation Tracker and
Prompt Tracker, we conduct extensive experiments on 6 popular tracking tasks
across 11 benchmarks and our OneTracker outperforms other models and achieves
state-of-the-art performance.
| [
{
"created": "Thu, 14 Mar 2024 17:59:13 GMT",
"version": "v1"
}
] | 2024-03-15 | [
[
"Hong",
"Lingyi",
""
],
[
"Yan",
"Shilin",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Li",
"Wanyun",
""
],
[
"Zhou",
"Xinyu",
""
],
[
"Guo",
"Pinxue",
""
],
[
"Jiang",
"Kaixun",
""
],
[
"Chen",
"Yiting",
""
],
[
"Li",
"Jinglun",
""
],
[
"Chen",
"Zhaoyu",
""
],
[
"Zhang",
"Wenqiang",
""
]
] | Visual object tracking aims to localize the target object of each frame based on its initial appearance in the first frame. Depending on the input modality, tracking tasks can be divided into RGB tracking and RGB+X (e.g. RGB+N, and RGB+D) tracking. Despite the different input modalities, the core aspect of tracking is temporal matching. Based on this common ground, we present a general framework to unify various tracking tasks, termed as OneTracker. OneTracker first performs a large-scale pre-training on an RGB tracker called Foundation Tracker. This pretraining phase equips the Foundation Tracker with a stable ability to estimate the location of the target object. Then we regard other modality information as prompt and build Prompt Tracker upon Foundation Tracker. Through freezing the Foundation Tracker and only adjusting some additional trainable parameters, Prompt Tracker inherits the strong localization ability from Foundation Tracker and achieves parameter-efficient finetuning on downstream RGB+X tracking tasks. To evaluate the effectiveness of our general framework OneTracker, which consists of Foundation Tracker and Prompt Tracker, we conduct extensive experiments on 6 popular tracking tasks across 11 benchmarks and our OneTracker outperforms other models and achieves state-of-the-art performance. |
1306.0195 | Prateek Dewan | Prateek Dewan, Niharika Sachdeva, Mayank Gupta, Ponnurangam Kumaraguru | ChaMAILeon: Exploring the Usability of a Privacy Preserving Email
Sharing System | 12 pages without references and appendices | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While passwords, by definition, are meant to be secret, recent trends have
witnessed an increasing number of people sharing their email passwords with
friends, colleagues, and significant others. However, leading websites like
Google advise their users not to share their passwords with anyone, to avoid
security and privacy breaches. To understand users' general password sharing
behavior and practices, we conducted an online survey with 209 Indian
participants and found that 64.35% of the participants felt a need to share
their email passwords. Further, about 77% of the participants said that they
would want to use a system which could provide them access control features, to
maintain their privacy while sharing emails. To address the privacy concerns of
users who need to share emails, we propose ChaMAILeon, a system which enables
users to share their email passwords while maintaining their privacy.
ChaMAILeon allows users to create multiple passwords for their email account.
Each such password corresponds to a different set of access control rules, and
gives a different view of the same email account. We conducted a controlled
experiment with 30 participants to evaluate the usability of the system. Each
participant was required to perform 5 tasks. Each task corresponded to
different access control rules, which the participant was required to set, for
a dummy email account. We found that, with a reasonable number of multiple
attempts, all 30 participants were able to perform all 5 tasks given to them.
The system usability score was found to be 75.42. Moreover, 56.6% of the
participants said that they would like to use ChaMAILeon frequently.
| [
{
"created": "Sun, 2 Jun 2013 11:23:27 GMT",
"version": "v1"
}
] | 2013-06-04 | [
[
"Dewan",
"Prateek",
""
],
[
"Sachdeva",
"Niharika",
""
],
[
"Gupta",
"Mayank",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] | While passwords, by definition, are meant to be secret, recent trends have witnessed an increasing number of people sharing their email passwords with friends, colleagues, and significant others. However, leading websites like Google advise their users not to share their passwords with anyone, to avoid security and privacy breaches. To understand users' general password sharing behavior and practices, we conducted an online survey with 209 Indian participants and found that 64.35% of the participants felt a need to share their email passwords. Further, about 77% of the participants said that they would want to use a system which could provide them access control features, to maintain their privacy while sharing emails. To address the privacy concerns of users who need to share emails, we propose ChaMAILeon, a system which enables users to share their email passwords while maintaining their privacy. ChaMAILeon allows users to create multiple passwords for their email account. Each such password corresponds to a different set of access control rules, and gives a different view of the same email account. We conducted a controlled experiment with 30 participants to evaluate the usability of the system. Each participant was required to perform 5 tasks. Each task corresponded to different access control rules, which the participant was required to set, for a dummy email account. We found that, with a reasonable number of multiple attempts, all 30 participants were able to perform all 5 tasks given to them. The system usability score was found to be 75.42. Moreover, 56.6% of the participants said that they would like to use ChaMAILeon frequently. |
1502.04868 | Rafael Boloix-Tortosa | Rafael Boloix-Tortosa, F. Javier Pay\'an-Somet, Eva Arias-de-Reyna and
Juan Jos\'e Murillo-Fuentes | Proper Complex Gaussian Processes for Regression | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex-valued signals are used in the modeling of many systems in
engineering and science, hence being of fundamental interest. Often, random
complex-valued signals are considered to be proper. A proper complex random
variable or process is uncorrelated with its complex conjugate. This assumption
is a good model of the underlying physics in many problems, and simplifies the
computations. While linear processing and neural networks have been widely
studied for these signals, the development of complex-valued nonlinear kernel
approaches remains an open problem. In this paper we propose Gaussian processes
for regression as a framework to develop 1) a solution for proper
complex-valued kernel regression and 2) the design of the reproducing kernel
for complex-valued inputs, using the convolutional approach for
cross-covariances. In this design we pay attention to preserve, in the complex
domain, the measure of similarity between near inputs. The hyperparameters of
the kernel are learned maximizing the marginal likelihood using Wirtinger
derivatives. Besides, the approach is connected to the multiple output learning
scenario. In the experiments included, we first solve a proper complex Gaussian
process where the cross-covariance does not cancel, a challenging scenario when
dealing with proper complex signals. Then we successfully use these novel
results to solve some problems previously proposed in the literature as
benchmarks, reporting a remarkable improvement in the estimation error.
| [
{
"created": "Tue, 17 Feb 2015 11:59:44 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Feb 2015 09:33:34 GMT",
"version": "v2"
}
] | 2015-02-19 | [
[
"Boloix-Tortosa",
"Rafael",
""
],
[
"Payán-Somet",
"F. Javier",
""
],
[
"Arias-de-Reyna",
"Eva",
""
],
[
"Murillo-Fuentes",
"Juan José",
""
]
] | Complex-valued signals are used in the modeling of many systems in engineering and science, hence being of fundamental interest. Often, random complex-valued signals are considered to be proper. A proper complex random variable or process is uncorrelated with its complex conjugate. This assumption is a good model of the underlying physics in many problems, and simplifies the computations. While linear processing and neural networks have been widely studied for these signals, the development of complex-valued nonlinear kernel approaches remains an open problem. In this paper we propose Gaussian processes for regression as a framework to develop 1) a solution for proper complex-valued kernel regression and 2) the design of the reproducing kernel for complex-valued inputs, using the convolutional approach for cross-covariances. In this design we pay attention to preserve, in the complex domain, the measure of similarity between near inputs. The hyperparameters of the kernel are learned maximizing the marginal likelihood using Wirtinger derivatives. Besides, the approach is connected to the multiple output learning scenario. In the experiments included, we first solve a proper complex Gaussian process where the cross-covariance does not cancel, a challenging scenario when dealing with proper complex signals. Then we successfully use these novel results to solve some problems previously proposed in the literature as benchmarks, reporting a remarkable improvement in the estimation error. |
cs/0404047 | Gianluca Argentini | Gianluca Argentini | Using matrices in post-processing phase of CFD simulations | Paper based on presentation-talk at SCICOMP9, Bologna (Italy), March
23-26, 2004; workshop organized by IBM, CINECA (Italy) (dr. Sigismondo
Boschi, dr. Giovanni Erbacci), NERSC-DOE (USA) (dr. David Skinner), web site:
www.spscicomp.org ; main topics: Computational Fluid Dynamics | Progress in Industrial Mathematics at ECMI 2004 - Eindhoven
(Netherlands), Springer, 2005 | null | null | cs.NA cs.DC physics.comp-ph | null | In this work I present a technique of construction and fast evaluation of a
family of cubic polynomials for analytic smoothing and graphical rendering of
particle trajectories for flows in a generic geometry. The principal result of
the work was implementation and test of a method for interpolating 3D points by
regular parametric curves and their fast and efficient evaluation for a good
resolution of rendering. For the purpose I have used a parallel environment
using a multiprocessor cluster architecture. The efficiency of the used method
is good, mainly reducing the number of floating-point computations by caching
the numerical values of some line-parameter's powers, and reducing the
necessity of communication among processes. This work has been developed for
the Research and Development Department of my company for planning advanced
customized models of industrial burners.
| [
{
"created": "Thu, 22 Apr 2004 15:33:52 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Argentini",
"Gianluca",
""
]
] | In this work I present a technique of construction and fast evaluation of a family of cubic polynomials for analytic smoothing and graphical rendering of particle trajectories for flows in a generic geometry. The principal result of the work was implementation and test of a method for interpolating 3D points by regular parametric curves and their fast and efficient evaluation for a good resolution of rendering. For the purpose I have used a parallel environment using a multiprocessor cluster architecture. The efficiency of the used method is good, mainly reducing the number of floating-point computations by caching the numerical values of some line-parameter's powers, and reducing the necessity of communication among processes. This work has been developed for the Research and Development Department of my company for planning advanced customized models of industrial burners.
2010.12532 | Nicole Peinelt | Nicole Peinelt, Marek Rei and Maria Liakata | GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight
Gated Injection Method | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large pre-trained language models such as BERT have been the driving force
behind recent improvements across many NLP tasks. However, BERT is only trained
to predict missing words - either behind masks or in the next sentence - and
has no knowledge of lexical, syntactic or semantic information beyond what it
picks up through unsupervised pre-training. We propose a novel method to
explicitly inject linguistic knowledge in the form of word embeddings into any
layer of a pre-trained BERT. Our performance improvements on multiple semantic
similarity datasets when injecting dependency-based and counter-fitted
embeddings indicate that such information is beneficial and currently missing
from the original model. Our qualitative analysis shows that counter-fitted
embedding injection particularly helps with cases involving synonym pairs.
| [
{
"created": "Fri, 23 Oct 2020 17:00:26 GMT",
"version": "v1"
}
] | 2020-10-26 | [
[
"Peinelt",
"Nicole",
""
],
[
"Rei",
"Marek",
""
],
[
"Liakata",
"Maria",
""
]
] | Large pre-trained language models such as BERT have been the driving force behind recent improvements across many NLP tasks. However, BERT is only trained to predict missing words - either behind masks or in the next sentence - and has no knowledge of lexical, syntactic or semantic information beyond what it picks up through unsupervised pre-training. We propose a novel method to explicitly inject linguistic knowledge in the form of word embeddings into any layer of a pre-trained BERT. Our performance improvements on multiple semantic similarity datasets when injecting dependency-based and counter-fitted embeddings indicate that such information is beneficial and currently missing from the original model. Our qualitative analysis shows that counter-fitted embedding injection particularly helps with cases involving synonym pairs. |
2404.16748 | Junting Dong | Junting Dong, Qi Fang, Zehuan Huang, Xudong Xu, Jingbo Wang, Sida
Peng, Bo Dai | TELA: Text to Layer-wise 3D Clothed Human Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the task of 3D clothed human generation from textual
descriptions. Previous works usually encode the human body and clothes as a
holistic model and generate the whole model in a single-stage optimization,
which makes them struggle for clothing editing and meanwhile lose fine-grained
control over the whole generation process. To solve this, we propose a
layer-wise clothed human representation combined with a progressive
optimization strategy, which produces clothing-disentangled 3D human models
while providing control capacity for the generation process. The basic idea is
progressively generating a minimal-clothed human body and layer-wise clothes.
During clothing generation, a novel stratified compositional rendering method
is proposed to fuse multi-layer human models, and a new loss function is
utilized to help decouple the clothing model from the human body. The proposed
method achieves high-quality disentanglement, which thereby provides an
effective way for 3D garment generation. Extensive experiments demonstrate that
our approach achieves state-of-the-art 3D clothed human generation while also
supporting cloth editing applications such as virtual try-on. Project page:
http://jtdong.com/tela_layer/
| [
{
"created": "Thu, 25 Apr 2024 17:05:38 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Dong",
"Junting",
""
],
[
"Fang",
"Qi",
""
],
[
"Huang",
"Zehuan",
""
],
[
"Xu",
"Xudong",
""
],
[
"Wang",
"Jingbo",
""
],
[
"Peng",
"Sida",
""
],
[
"Dai",
"Bo",
""
]
] | This paper addresses the task of 3D clothed human generation from textual descriptions. Previous works usually encode the human body and clothes as a holistic model and generate the whole model in a single-stage optimization, which makes them struggle for clothing editing and meanwhile lose fine-grained control over the whole generation process. To solve this, we propose a layer-wise clothed human representation combined with a progressive optimization strategy, which produces clothing-disentangled 3D human models while providing control capacity for the generation process. The basic idea is progressively generating a minimal-clothed human body and layer-wise clothes. During clothing generation, a novel stratified compositional rendering method is proposed to fuse multi-layer human models, and a new loss function is utilized to help decouple the clothing model from the human body. The proposed method achieves high-quality disentanglement, which thereby provides an effective way for 3D garment generation. Extensive experiments demonstrate that our approach achieves state-of-the-art 3D clothed human generation while also supporting cloth editing applications such as virtual try-on. Project page: http://jtdong.com/tela_layer/
1401.3582 | Xiaomin Bao | Xiaomin Bao | The equivalent identities of the MacWilliams identity for linear codes | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use derivatives to prove the equivalences between the MacWilliams identity
and its four equivalent forms, and present new interpretations for the four
equivalent forms. Our results explicitly give the relationships between the
MacWilliams identity and its four equivalent forms.
| [
{
"created": "Mon, 23 Dec 2013 13:19:42 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Feb 2014 07:47:47 GMT",
"version": "v2"
}
] | 2014-02-11 | [
[
"Bao",
"Xiaomin",
""
]
] | We use derivatives to prove the equivalences between the MacWilliams identity and its four equivalent forms, and present new interpretations for the four equivalent forms. Our results explicitly give the relationships between the MacWilliams identity and its four equivalent forms.
1603.08878 | Alexander Barg | Itzhak Tamo, Alexander Barg, Sreechakra Goparaju, and Robert
Calderbank | Cyclic LRC Codes, binary LRC codes, and upper bounds on the distance of
cyclic codes | 12pp., submitted for publication. An extended abstract of this
submission was posted earlier as arXiv:1502.01414 and was published in
Proceedings of the 2015 IEEE International Symposium on Information Theory,
Hong Kong, China, June 14-19, 2015, pp. 1262--1266 | International Journal of Information and Coding Theory, vol. 3,
no. 4, pp.345-364 (2016) | 10.1504/IJICOT.2016.079496 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider linear cyclic codes with the locality property, or locally
recoverable codes (LRC codes). A family of LRC codes that generalize the
classical construction of Reed-Solomon codes was constructed in a recent paper
by I. Tamo and A. Barg (IEEE Trans. Inform. Theory, no. 8, 2014). In this paper
we focus on optimal cyclic codes that arise from this construction. We give a
characterization of these codes in terms of their zeros, and observe that there
are many equivalent ways of constructing optimal cyclic LRC codes over a given
field. We also study subfield subcodes of cyclic LRC codes (BCH-like LRC codes)
and establish several results about their locality and minimum distance. The
locality parameter of a cyclic code is related to the dual distance of this
code, and we phrase our results in terms of upper bounds on the dual distance.
| [
{
"created": "Tue, 29 Mar 2016 18:41:24 GMT",
"version": "v1"
}
] | 2017-02-10 | [
[
"Tamo",
"Itzhak",
""
],
[
"Barg",
"Alexander",
""
],
[
"Goparaju",
"Sreechakra",
""
],
[
"Calderbank",
"Robert",
""
]
] | We consider linear cyclic codes with the locality property, or locally recoverable codes (LRC codes). A family of LRC codes that generalize the classical construction of Reed-Solomon codes was constructed in a recent paper by I. Tamo and A. Barg (IEEE Trans. Inform. Theory, no. 8, 2014). In this paper we focus on optimal cyclic codes that arise from this construction. We give a characterization of these codes in terms of their zeros, and observe that there are many equivalent ways of constructing optimal cyclic LRC codes over a given field. We also study subfield subcodes of cyclic LRC codes (BCH-like LRC codes) and establish several results about their locality and minimum distance. The locality parameter of a cyclic code is related to the dual distance of this code, and we phrase our results in terms of upper bounds on the dual distance. |
2309.14971 | Matteo Pagin | Manishika Rawat, Matteo Pagin, Marco Giordani, Louis-Adrien Dufrene,
Quentin Lampin, Michele Zorzi | Minimizing Energy Consumption for 5G NR Beam Management for RedCap
Devices | null | null | null | null | cs.NI eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In 5G New Radio (NR), beam management entails periodic and continuous
transmission and reception of control signals in the form of synchronization
signal blocks (SSBs), used to perform initial access and/or channel estimation.
However, this procedure demands continuous energy consumption, which is
particularly challenging to handle for low-cost, low-complexity, and
battery-constrained devices, such as RedCap devices to support mid-market
Internet of Things (IoT) use cases. In this context, this work aims at reducing
the energy consumption during beam management for RedCap devices, while
ensuring that the desired Quality of Service (QoS) requirements are met. To do
so, we formalize an optimization problem in an Indoor Factory (InF) scenario to
select the best beam management parameters, including the beam update
periodicity and the beamwidth, to minimize energy consumption based on users'
distribution and their speed. The analysis yields the regions of feasibility,
i.e., the upper limit(s) on the beam management parameters for RedCap devices,
that we use to provide design guidelines accordingly.
| [
{
"created": "Tue, 26 Sep 2023 14:44:08 GMT",
"version": "v1"
}
] | 2023-09-27 | [
[
"Rawat",
"Manishika",
""
],
[
"Pagin",
"Matteo",
""
],
[
"Giordani",
"Marco",
""
],
[
"Dufrene",
"Louis-Adrien",
""
],
[
"Lampin",
"Quentin",
""
],
[
"Zorzi",
"Michele",
""
]
] | In 5G New Radio (NR), beam management entails periodic and continuous transmission and reception of control signals in the form of synchronization signal blocks (SSBs), used to perform initial access and/or channel estimation. However, this procedure demands continuous energy consumption, which is particularly challenging to handle for low-cost, low-complexity, and battery-constrained devices, such as RedCap devices to support mid-market Internet of Things (IoT) use cases. In this context, this work aims at reducing the energy consumption during beam management for RedCap devices, while ensuring that the desired Quality of Service (QoS) requirements are met. To do so, we formalize an optimization problem in an Indoor Factory (InF) scenario to select the best beam management parameters, including the beam update periodicity and the beamwidth, to minimize energy consumption based on users' distribution and their speed. The analysis yields the regions of feasibility, i.e., the upper limit(s) on the beam management parameters for RedCap devices, that we use to provide design guidelines accordingly. |
1903.06965 | Ahmet Serkan Karata\c{s} | Ahmet Serkan Karata\c{s} | Feather: A Feature Model Transformation Language | 29 pages, supplementary material published at
https://github.com/askaratas/Feather | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature modeling has been a very popular approach for variability management
in software product lines. Building a feature model requires substantial domain
expertise; however, even experts cannot foresee all future possibilities.
Changing requirements can force a feature model to evolve in order to adapt to
the new conditions. Feather is a language to describe model transformations
that will evolve a feature model. This article presents the structure and
foundations of Feather. First, the language elements, which consist of
declarations to characterize the model to evolve and commands to manipulate its
structure, are introduced. Then, semantics grounded in feature model
properties are given for the commands in order to provide precise command
definitions. Next, an interpreter that can realize the transformations
described by the commands in a Feather script is presented. Finally,
effectiveness of the language is discussed using two realistic examples, where
one of the examples includes a system from a dynamic environment and the other
employs a system that has a large feature model containing 1,227 features.
| [
{
"created": "Sat, 16 Mar 2019 18:08:37 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Karataş",
"Ahmet Serkan",
""
]
] | Feature modeling has been a very popular approach for variability management in software product lines. Building a feature model requires substantial domain expertise; however, even experts cannot foresee all future possibilities. Changing requirements can force a feature model to evolve in order to adapt to the new conditions. Feather is a language to describe model transformations that will evolve a feature model. This article presents the structure and foundations of Feather. First, the language elements, which consist of declarations to characterize the model to evolve and commands to manipulate its structure, are introduced. Then, semantics grounded in feature model properties are given for the commands in order to provide precise command definitions. Next, an interpreter that can realize the transformations described by the commands in a Feather script is presented. Finally, effectiveness of the language is discussed using two realistic examples, where one of the examples includes a system from a dynamic environment and the other employs a system that has a large feature model containing 1,227 features.
1309.6818 | Jakramate Bootkrajang | Jakramate Bootkrajang, Ata Kaban | Boosting in the presence of label noise | Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013) | null | null | UAI-P-2013-PG-82-91 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boosting is known to be sensitive to label noise. We studied two approaches
to improve AdaBoost's robustness against labelling errors. One is to employ a
label-noise robust classifier as a base learner, while the other is to modify
the AdaBoost algorithm to be more robust. Empirical evaluation shows that a
committee of robust classifiers, although it converges faster than
non-label-noise-aware AdaBoost, is still susceptible to label noise. However, pairing it with
the new robust Boosting algorithm we propose here results in a more resilient
algorithm under mislabelling.
| [
{
"created": "Thu, 26 Sep 2013 12:35:03 GMT",
"version": "v1"
}
] | 2013-09-27 | [
[
"Bootkrajang",
"Jakramate",
""
],
[
"Kaban",
"Ata",
""
]
] | Boosting is known to be sensitive to label noise. We studied two approaches to improve AdaBoost's robustness against labelling errors. One is to employ a label-noise robust classifier as a base learner, while the other is to modify the AdaBoost algorithm to be more robust. Empirical evaluation shows that a committee of robust classifiers, although it converges faster than non-label-noise-aware AdaBoost, is still susceptible to label noise. However, pairing it with the new robust Boosting algorithm we propose here results in a more resilient algorithm under mislabelling.
1711.06851 | Alberto Perez Veiga | Alberto Perez Veiga | Project Success in Agile Development Projects | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper explains and clarifies the differences between Waterfall and Agile
development methodologies, establishes what criteria could be taken into
account to properly define project success within the scope of software
development projects, and finally tries to clarify if project success is the
reason why many organizations are moving to Agile methodologies from other ones
such as Waterfall. In the form of a literature review, it analyses several
publications, investigations and case studies that point out the motives why
companies moved to Agile, as well as the results they observed afterward. It
also analyses overall statistics of project outcomes after companies evolved
from traditional methodologies such as Waterfall to Agile development
approaches.
| [
{
"created": "Sat, 18 Nov 2017 12:14:35 GMT",
"version": "v1"
}
] | 2017-11-21 | [
[
"Veiga",
"Alberto Perez",
""
]
] | The paper explains and clarifies the differences between Waterfall and Agile development methodologies, establishes what criteria could be taken into account to properly define project success within the scope of software development projects, and finally tries to clarify if project success is the reason why many organizations are moving to Agile methodologies from other ones such as Waterfall. In the form of a literature review, it analyses several publications, investigations and case studies that point out the motives why companies moved to Agile, as well as the results they observed afterward. It also analyses overall statistics of project outcomes after companies evolved from traditional methodologies such as Waterfall to Agile development approaches.
2404.08370 | Svyatoslav Gryaznov | Svyatoslav Gryaznov, Sergei Ovcharov, Artur Riazanov | Resolution Over Linear Equations: Combinatorial Games for Tree-like Size
and Space | null | null | 10.1145/3675415 | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the proof system Res($\oplus$) introduced by Itsykson and Sokolov
(Ann. Pure Appl. Log.'20), which is an extension of the resolution proof system
and operates with disjunctions of linear equations over $\mathbb{F}_2$.
We study characterizations of tree-like size and space of Res($\oplus$)
refutations using combinatorial games. Namely, we introduce a class of
extensible formulas and prove tree-like size lower bounds on it using
Prover-Delayer games, as well as space lower bounds. This class is of
particular interest since it contains many classical combinatorial principles,
including the pigeonhole, ordering, and dense linear ordering principles.
Furthermore, we present the width-space relation for Res($\oplus$)
generalizing the results by Atserias and Dalmau (J. Comput. Syst. Sci.'08) and
their variant of Spoiler-Duplicator games.
| [
{
"created": "Fri, 12 Apr 2024 10:13:18 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Jul 2024 10:45:35 GMT",
"version": "v2"
}
] | 2024-07-11 | [
[
"Gryaznov",
"Svyatoslav",
""
],
[
"Ovcharov",
"Sergei",
""
],
[
"Riazanov",
"Artur",
""
]
] | We consider the proof system Res($\oplus$) introduced by Itsykson and Sokolov (Ann. Pure Appl. Log.'20), which is an extension of the resolution proof system and operates with disjunctions of linear equations over $\mathbb{F}_2$. We study characterizations of tree-like size and space of Res($\oplus$) refutations using combinatorial games. Namely, we introduce a class of extensible formulas and prove tree-like size lower bounds on it using Prover-Delayer games, as well as space lower bounds. This class is of particular interest since it contains many classical combinatorial principles, including the pigeonhole, ordering, and dense linear ordering principles. Furthermore, we present the width-space relation for Res($\oplus$) generalizing the results by Atserias and Dalmau (J. Comput. Syst. Sci.'08) and their variant of Spoiler-Duplicator games. |
2010.07565 | Junfu Wang | Junfu Wang, Yunhong Wang, Zhen Yang, Liang Yang, Yuanfang Guo | Bi-GCN: Binary Graph Convolutional Network | Accepted by CVPR 2021 as oral presentation | null | 10.1109/CVPR46437.2021.00161 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) have achieved tremendous success in graph
representation learning. Unfortunately, current GNNs usually rely on loading
the entire attributed graph into the network for processing. This implicit
assumption may not be satisfied with limited memory resources, especially when
the attributed graph is large. In this paper, we are the first to propose a Binary
Graph Convolutional Network (Bi-GCN), which binarizes both the network
parameters and input node features. Besides, the original matrix
multiplications are revised to binary operations for accelerations. According
to the theoretical analysis, our Bi-GCN can reduce the memory consumption by an
average of ~30x for both the network parameters and input data, and accelerate
the inference speed by an average of ~47x, on the citation networks. Meanwhile,
we also design a new gradient approximation based back-propagation method to
train our Bi-GCN well. Extensive experiments have demonstrated that our Bi-GCN
can give a comparable performance compared to the full-precision baselines.
Besides, our binarization approach can be easily applied to other GNNs, which
has been verified in the experiments.
| [
{
"created": "Thu, 15 Oct 2020 07:26:23 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Apr 2021 12:51:30 GMT",
"version": "v2"
}
] | 2022-02-15 | [
[
"Wang",
"Junfu",
""
],
[
"Wang",
"Yunhong",
""
],
[
"Yang",
"Zhen",
""
],
[
"Yang",
"Liang",
""
],
[
"Guo",
"Yuanfang",
""
]
] | Graph Neural Networks (GNNs) have achieved tremendous success in graph representation learning. Unfortunately, current GNNs usually rely on loading the entire attributed graph into the network for processing. This implicit assumption may not be satisfied with limited memory resources, especially when the attributed graph is large. In this paper, we are the first to propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node features. Besides, the original matrix multiplications are revised to binary operations for accelerations. According to the theoretical analysis, our Bi-GCN can reduce the memory consumption by an average of ~30x for both the network parameters and input data, and accelerate the inference speed by an average of ~47x, on the citation networks. Meanwhile, we also design a new gradient approximation based back-propagation method to train our Bi-GCN well. Extensive experiments have demonstrated that our Bi-GCN can give a comparable performance compared to the full-precision baselines. Besides, our binarization approach can be easily applied to other GNNs, which has been verified in the experiments.
1806.04391 | Kai Hu | Xiaoteng Zhang, Yixin Bao, Feiyun Zhang, Kai Hu, Yicheng Wang, Liang
Zhu, Qinzhu He, Yining Lin, Jie Shao and Yao Peng | Qiniu Submission to ActivityNet Challenge 2018 | 4 pages, 3 figures, CVPR workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce our submissions for the tasks of trimmed activity
recognition (Kinetics) and trimmed event recognition (Moments in Time) for
Activitynet Challenge 2018. In the two tasks, non-local neural networks and
temporal segment networks are implemented as our base models. Multi-modal cues
such as RGB image, optical flow and acoustic signal have also been used in our
method. We also propose new non-local-based models for further improvement on
the recognition accuracy. The final submissions after ensembling the models
achieve 83.5% top-1 accuracy and 96.8% top-5 accuracy on the Kinetics
validation set, 35.81% top-1 accuracy and 62.59% top-5 accuracy on the MIT
validation set.
| [
{
"created": "Tue, 12 Jun 2018 08:42:55 GMT",
"version": "v1"
}
] | 2018-06-13 | [
[
"Zhang",
"Xiaoteng",
""
],
[
"Bao",
"Yixin",
""
],
[
"Zhang",
"Feiyun",
""
],
[
"Hu",
"Kai",
""
],
[
"Wang",
"Yicheng",
""
],
[
"Zhu",
"Liang",
""
],
[
"He",
"Qinzhu",
""
],
[
"Lin",
"Yining",
""
],
[
"Shao",
"Jie",
""
],
[
"Peng",
"Yao",
""
]
] | In this paper, we introduce our submissions for the tasks of trimmed activity recognition (Kinetics) and trimmed event recognition (Moments in Time) for Activitynet Challenge 2018. In the two tasks, non-local neural networks and temporal segment networks are implemented as our base models. Multi-modal cues such as RGB image, optical flow and acoustic signal have also been used in our method. We also propose new non-local-based models for further improvement on the recognition accuracy. The final submissions after ensembling the models achieve 83.5% top-1 accuracy and 96.8% top-5 accuracy on the Kinetics validation set, 35.81% top-1 accuracy and 62.59% top-5 accuracy on the MIT validation set. |
2205.11507 | Quanquan Gu | Dongruo Zhou and Quanquan Gu | Computationally Efficient Horizon-Free Reinforcement Learning for Linear
Mixture MDPs | 33 pages, 1 table | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have shown that episodic reinforcement learning (RL) is not
more difficult than contextual bandits, even with a long planning horizon and
unknown state transitions. However, these results are limited to either tabular
Markov decision processes (MDPs) or computationally inefficient algorithms for
linear mixture MDPs. In this paper, we propose the first computationally
efficient horizon-free algorithm for linear mixture MDPs, which achieves the
optimal $\tilde O(d\sqrt{K} +d^2)$ regret up to logarithmic factors. Our
algorithm adapts a weighted least square estimator for the unknown transitional
dynamic, where the weight is both \emph{variance-aware} and
\emph{uncertainty-aware}. When applying our weighted least square estimator to
heterogeneous linear bandits, we can obtain an $\tilde O(d\sqrt{\sum_{k=1}^K
\sigma_k^2} +d)$ regret in the first $K$ rounds, where $d$ is the dimension of
the context and $\sigma_k^2$ is the variance of the reward in the $k$-th round.
This also improves upon the best-known algorithms in this setting when
$\sigma_k^2$'s are known.
| [
{
"created": "Mon, 23 May 2022 17:59:18 GMT",
"version": "v1"
}
] | 2022-05-24 | [
[
"Zhou",
"Dongruo",
""
],
[
"Gu",
"Quanquan",
""
]
] | Recent studies have shown that episodic reinforcement learning (RL) is not more difficult than contextual bandits, even with a long planning horizon and unknown state transitions. However, these results are limited to either tabular Markov decision processes (MDPs) or computationally inefficient algorithms for linear mixture MDPs. In this paper, we propose the first computationally efficient horizon-free algorithm for linear mixture MDPs, which achieves the optimal $\tilde O(d\sqrt{K} +d^2)$ regret up to logarithmic factors. Our algorithm adapts a weighted least square estimator for the unknown transitional dynamic, where the weight is both \emph{variance-aware} and \emph{uncertainty-aware}. When applying our weighted least square estimator to heterogeneous linear bandits, we can obtain an $\tilde O(d\sqrt{\sum_{k=1}^K \sigma_k^2} +d)$ regret in the first $K$ rounds, where $d$ is the dimension of the context and $\sigma_k^2$ is the variance of the reward in the $k$-th round. This also improves upon the best-known algorithms in this setting when $\sigma_k^2$'s are known. |
1001.2860 | Djamal Belazzougui | Djamal Belazzougui | Succinct Dictionary Matching With No Slowdown | Corrected typos and other minor errors | null | 10.1007/978-3-642-13509-5_9 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of dictionary matching is a classical problem in string matching:
given a set S of d strings of total length n characters over an (not
necessarily constant) alphabet of size sigma, build a data structure so that we
can match in a any text T all occurrences of strings belonging to S. The
classical solution for this problem is the Aho-Corasick automaton which finds
all occ occurrences in a text T in time O(|T| + occ) using a data structure
that occupies O(m log m) bits of space where m <= n + 1 is the number of states
in the automaton. In this paper we show that the Aho-Corasick automaton can be
represented in just m(log sigma + O(1)) + O(d log(n/d)) bits of space while
still maintaining the ability to answer queries in O(|T| + occ) time. To the
best of our knowledge, the currently fastest succinct data structure for the
dictionary matching problem uses space O(n log sigma) while answering queries
in O(|T|log log n + occ) time. In this paper we also show how the space
occupancy can be reduced to m(H0 + O(1)) + O(d log(n/d)) where H0 is the
empirical entropy of the characters appearing in the trie representation of the
set S, provided that sigma < m^epsilon for any constant 0 < epsilon < 1. The
query time remains unchanged.
| [
{
"created": "Sat, 16 Jan 2010 22:10:57 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Feb 2010 21:06:23 GMT",
"version": "v2"
}
] | 2015-05-18 | [
[
"Belazzougui",
"Djamal",
""
]
] | The problem of dictionary matching is a classical problem in string matching: given a set S of d strings of total length n characters over an (not necessarily constant) alphabet of size sigma, build a data structure so that we can match in any text T all occurrences of strings belonging to S. The classical solution for this problem is the Aho-Corasick automaton which finds all occ occurrences in a text T in time O(|T| + occ) using a data structure that occupies O(m log m) bits of space where m <= n + 1 is the number of states in the automaton. In this paper we show that the Aho-Corasick automaton can be represented in just m(log sigma + O(1)) + O(d log(n/d)) bits of space while still maintaining the ability to answer queries in O(|T| + occ) time. To the best of our knowledge, the currently fastest succinct data structure for the dictionary matching problem uses space O(n log sigma) while answering queries in O(|T|log log n + occ) time. In this paper we also show how the space occupancy can be reduced to m(H0 + O(1)) + O(d log(n/d)) where H0 is the empirical entropy of the characters appearing in the trie representation of the set S, provided that sigma < m^epsilon for any constant 0 < epsilon < 1. The query time remains unchanged. |
1206.6356 | Ameya Agaskar | Ameya Agaskar and Yue M. Lu | A Spectral Graph Uncertainty Principle | 40 pages, 8 figures | IEEE Trans. Info. Theory 59 (2013) 4338-4356 | 10.1109/TIT.2013.2252233 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The spectral theory of graphs provides a bridge between classical signal
processing and the nascent field of graph signal processing. In this paper, a
spectral graph analogy to Heisenberg's celebrated uncertainty principle is
developed. Just as the classical result provides a tradeoff between signal
localization in time and frequency, this result provides a fundamental tradeoff
between a signal's localization on a graph and in its spectral domain. Using
the eigenvectors of the graph Laplacian as a surrogate Fourier basis,
quantitative definitions of graph and spectral "spreads" are given, and a
complete characterization of the feasibility region of these two quantities is
developed. In particular, the lower boundary of the region, referred to as the
uncertainty curve, is shown to be achieved by eigenvectors associated with the
smallest eigenvalues of an affine family of matrices. The convexity of the
uncertainty curve allows it to be found to within $\varepsilon$ by a fast
approximation algorithm requiring $O(\varepsilon^{-1/2})$ typically sparse
eigenvalue evaluations. Closed-form expressions for the uncertainty curves for
some special classes of graphs are derived, and an accurate analytical
approximation for the expected uncertainty curve of Erd\H{o}s-R\'enyi random
graphs is developed. These theoretical results are validated by numerical
experiments, which also reveal an intriguing connection between diffusion
processes on graphs and the uncertainty bounds.
| [
{
"created": "Wed, 27 Jun 2012 18:10:56 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2013 16:06:51 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Aug 2013 18:18:44 GMT",
"version": "v3"
}
] | 2013-08-02 | [
[
"Agaskar",
"Ameya",
""
],
[
"Lu",
"Yue M.",
""
]
] | The spectral theory of graphs provides a bridge between classical signal processing and the nascent field of graph signal processing. In this paper, a spectral graph analogy to Heisenberg's celebrated uncertainty principle is developed. Just as the classical result provides a tradeoff between signal localization in time and frequency, this result provides a fundamental tradeoff between a signal's localization on a graph and in its spectral domain. Using the eigenvectors of the graph Laplacian as a surrogate Fourier basis, quantitative definitions of graph and spectral "spreads" are given, and a complete characterization of the feasibility region of these two quantities is developed. In particular, the lower boundary of the region, referred to as the uncertainty curve, is shown to be achieved by eigenvectors associated with the smallest eigenvalues of an affine family of matrices. The convexity of the uncertainty curve allows it to be found to within $\varepsilon$ by a fast approximation algorithm requiring $O(\varepsilon^{-1/2})$ typically sparse eigenvalue evaluations. Closed-form expressions for the uncertainty curves for some special classes of graphs are derived, and an accurate analytical approximation for the expected uncertainty curve of Erd\H{o}s-R\'enyi random graphs is developed. These theoretical results are validated by numerical experiments, which also reveal an intriguing connection between diffusion processes on graphs and the uncertainty bounds. |
2203.16859 | Se-Hang Cheong | Se-Hang Cheong, Yain-Whar Si | Boundary Node Detection and Unfolding of Complex Non-Convex Ad Hoc
Networks | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex non-convex ad hoc networks (CNCAH) contain intersecting polygons and
edges. In many instances, the layouts of these networks are not entirely convex
in shape. In this article, we propose a Kamada-Kawai-based algorithm called
W-KK-MS for boundary node detection problems, which is capable of aligning node
positions while achieving high sensitivity, specificity, and accuracy in
producing a visual drawing from the input network topology. The algorithm put
forward in this article selects and assigns weights to top-k nodes in each
iteration to speed up the updating process of nodes. We also propose a novel
approach to detect and unfold stacked regions in CNCAH networks. Experimental
results show that the proposed algorithms can achieve fast convergence on
boundary node detection in CNCAH networks and are able to successfully unfold
stacked regions. The design and implementation of a prototype system called
ELnet for analyzing CNCAH networks is also described in this article. The ELnet
system is capable of generating synthetic networks for testing, integrating
with force-directed algorithms, and visualizing and analyzing algorithms'
outcomes.
| [
{
"created": "Thu, 31 Mar 2022 07:41:57 GMT",
"version": "v1"
}
] | 2022-04-01 | [
[
"Cheong",
"Se-Hang",
""
],
[
"Si",
"Yain-Whar",
""
]
] | Complex non-convex ad hoc networks (CNCAH) contain intersecting polygons and edges. In many instances, the layouts of these networks are not entirely convex in shape. In this article, we propose a Kamada-Kawai-based algorithm called W-KK-MS for boundary node detection problems, which is capable of aligning node positions while achieving high sensitivity, specificity, and accuracy in producing a visual drawing from the input network topology. The algorithm put forward in this article selects and assigns weights to top-k nodes in each iteration to speed up the updating process of nodes. We also propose a novel approach to detect and unfold stacked regions in CNCAH networks. Experimental results show that the proposed algorithms can achieve fast convergence on boundary node detection in CNCAH networks and are able to successfully unfold stacked regions. The design and implementation of a prototype system called ELnet for analyzing CNCAH networks is also described in this article. The ELnet system is capable of generating synthetic networks for testing, integrating with force-directed algorithms, and visualizing and analyzing algorithms' outcomes. |
1704.06358 | Paul Tupper | Benjamin Goodman, Paul Tupper | Stability and Fluctuations in a Simple Model of Phonetic Category Change | 19 pages | null | null | null | cs.CL math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In spoken languages, speakers divide up the space of phonetic possibilities
into different regions, corresponding to different phonemes. We consider a
simple exemplar model of how this division of phonetic space varies over time
among a population of language users. In the particular model we consider, we
show that, once the system is initialized with a given set of phonemes, that
phonemes do not become extinct: all phonemes will be maintained in the system
for all time. This is in contrast to what is observed in more complex models.
Furthermore, we show that the boundaries between phonemes fluctuate and we
quantitatively study the fluctuations in a simple instance of our model. These
results prepare the ground for more sophisticated models in which some phonemes
go extinct or new phonemes emerge through other processes.
| [
{
"created": "Thu, 20 Apr 2017 22:28:14 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Dec 2017 04:46:08 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Jun 2018 00:23:20 GMT",
"version": "v3"
}
] | 2018-07-02 | [
[
"Goodman",
"Benjamin",
""
],
[
"Tupper",
"Paul",
""
]
] | In spoken languages, speakers divide up the space of phonetic possibilities into different regions, corresponding to different phonemes. We consider a simple exemplar model of how this division of phonetic space varies over time among a population of language users. In the particular model we consider, we show that, once the system is initialized with a given set of phonemes, that phonemes do not become extinct: all phonemes will be maintained in the system for all time. This is in contrast to what is observed in more complex models. Furthermore, we show that the boundaries between phonemes fluctuate and we quantitatively study the fluctuations in a simple instance of our model. These results prepare the ground for more sophisticated models in which some phonemes go extinct or new phonemes emerge through other processes. |