id: stringlengths 9–10
submitter: stringlengths 1–64
authors: stringlengths 4–20.7k
title: stringlengths 4–246
comments: stringlengths 1–523
journal-ref: stringlengths 4–404
doi: stringlengths 11–153
report-no: stringlengths 2–254
categories: stringlengths 5–98
license: stringclasses (9 values)
orig_abstract: stringlengths 14–3.35k
versions: listlengths 1–60
update_date: stringlengths 10–10
authors_parsed: listlengths 1–1.35k
abstract: stringlengths 11–3.34k
2307.13834
Gabriel Klasson Landin
Gabriel Klasson Landin and Truls Jilborg
Determining the Optimal Frequencies for a Duplicated Randomized Clock SCA Countermeasure
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Side-channel attacks pose significant challenges to the security of embedded systems, often allowing attackers to circumvent encryption algorithms in minutes compared to the trillions of years required for brute-force attacks. To mitigate these vulnerabilities, various countermeasures have been developed. This study focuses on two specific countermeasures: randomization of the encryption algorithm's clock and the incorporation of a dummy core to disguise power traces. The objective of this research is to identify the optimal frequencies that yield the highest level of randomness when these two countermeasures are combined. By investigating the interplay between clock randomization and the presence of dummy cores, we aim to enhance the overall security of embedded systems. The insights gained from this study will contribute to the development of more robust countermeasures against side-channel attacks, bolstering the protection of sensitive information and systems. To achieve this, we conduct simulations and perform side-channel attacks on an FPGA to establish the relationship between frequencies and the resulting protection. We break the encryption on a non-duplicated circuit and record the minimum number of power traces required as well as the timing overhead. We do this for all frequency sets considered, which gives a good indication of which sets provide good protection. By comparing these results with those from the duplicated circuit, we draw similar conclusions about whether a frequency set is secure. Based on our results, we argue that having one frequency lower than half of the base frequency, with the other frequencies close to but not higher than the base, gives the highest security relative to the measured timing overhead.
[ { "created": "Tue, 25 Jul 2023 22:07:41 GMT", "version": "v1" } ]
2023-07-27
[ [ "Landin", "Gabriel Klasson", "" ], [ "Jilborg", "Truls", "" ] ]
Side-channel attacks pose significant challenges to the security of embedded systems, often allowing attackers to circumvent encryption algorithms in minutes compared to the trillions of years required for brute-force attacks. To mitigate these vulnerabilities, various countermeasures have been developed. This study focuses on two specific countermeasures: randomization of the encryption algorithm's clock and the incorporation of a dummy core to disguise power traces. The objective of this research is to identify the optimal frequencies that yield the highest level of randomness when these two countermeasures are combined. By investigating the interplay between clock randomization and the presence of dummy cores, we aim to enhance the overall security of embedded systems. The insights gained from this study will contribute to the development of more robust countermeasures against side-channel attacks, bolstering the protection of sensitive information and systems. To achieve this, we conduct simulations and perform side-channel attacks on an FPGA to establish the relationship between frequencies and the resulting protection. We break the encryption on a non-duplicated circuit and record the minimum number of power traces required as well as the timing overhead. We do this for all frequency sets considered, which gives a good indication of which sets provide good protection. By comparing these results with those from the duplicated circuit, we draw similar conclusions about whether a frequency set is secure. Based on our results, we argue that having one frequency lower than half of the base frequency, with the other frequencies close to but not higher than the base, gives the highest security relative to the measured timing overhead.
2405.08345
Kun Song
Kun Song, Gaoming Chen, Wenhang Liu, Zhenhua Xiong
Multi-Robot Rendezvous in Unknown Environment with Limited Communication
Submit to RAL. 8 pages, 6 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rendezvous aims at gathering all robots at a specific location, which is an important collaborative behavior for multi-robot systems. However, in an unknown environment, it is challenging to achieve rendezvous. Previous research mainly focuses on special scenarios where communication is not allowed and each robot executes a random searching strategy, which is highly time-consuming, especially in large-scale environments. In this work, we focus on rendezvous in unknown environments where communication is available. We divide this task into two steps: rendezvous-based environment exploration with relative pose (RP) estimation and rendezvous point selection. A new strategy called partitioned and incomplete exploration for rendezvous (PIER) is proposed to efficiently explore the unknown environment, where lightweight topological maps are constructed and shared among robots for RP estimation with very little communication. Then, a rendezvous point selection algorithm based on the merged topological map is proposed for efficient rendezvous of multi-robot systems. The effectiveness of the proposed methods is validated in both simulations and real-world experiments.
[ { "created": "Tue, 14 May 2024 06:33:56 GMT", "version": "v1" } ]
2024-05-15
[ [ "Song", "Kun", "" ], [ "Chen", "Gaoming", "" ], [ "Liu", "Wenhang", "" ], [ "Xiong", "Zhenhua", "" ] ]
Rendezvous aims at gathering all robots at a specific location, which is an important collaborative behavior for multi-robot systems. However, in an unknown environment, it is challenging to achieve rendezvous. Previous research mainly focuses on special scenarios where communication is not allowed and each robot executes a random searching strategy, which is highly time-consuming, especially in large-scale environments. In this work, we focus on rendezvous in unknown environments where communication is available. We divide this task into two steps: rendezvous-based environment exploration with relative pose (RP) estimation and rendezvous point selection. A new strategy called partitioned and incomplete exploration for rendezvous (PIER) is proposed to efficiently explore the unknown environment, where lightweight topological maps are constructed and shared among robots for RP estimation with very little communication. Then, a rendezvous point selection algorithm based on the merged topological map is proposed for efficient rendezvous of multi-robot systems. The effectiveness of the proposed methods is validated in both simulations and real-world experiments.
1501.01495
Tobias Fehenberger
Tobias Fehenberger, Alex Alvarado, Polina Bayvel, Norbert Hanik
On Achievable Rates for Long-Haul Fiber-Optic Communications
Hard decision mutual information analysis added, two typos corrected
null
10.1364/OE.23.009183
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lower bounds on mutual information (MI) of long-haul optical fiber systems for hard-decision and soft-decision decoding are studied. Ready-to-use expressions to calculate the MI are presented. Extensive numerical simulations are used to quantify how changes in the optical transmitter, receiver, and channel affect the achievable transmission rates of the system. Special emphasis is placed on the use of different quadrature amplitude modulation formats, channel spacings, digital back-propagation schemes and probabilistic shaping. The advantages of using MI over the prevailing $Q$-factor as a figure of merit of coded optical systems are also highlighted.
[ { "created": "Wed, 7 Jan 2015 13:57:07 GMT", "version": "v1" }, { "created": "Fri, 3 Apr 2015 04:36:38 GMT", "version": "v2" } ]
2015-04-06
[ [ "Fehenberger", "Tobias", "" ], [ "Alvarado", "Alex", "" ], [ "Bayvel", "Polina", "" ], [ "Hanik", "Norbert", "" ] ]
Lower bounds on mutual information (MI) of long-haul optical fiber systems for hard-decision and soft-decision decoding are studied. Ready-to-use expressions to calculate the MI are presented. Extensive numerical simulations are used to quantify how changes in the optical transmitter, receiver, and channel affect the achievable transmission rates of the system. Special emphasis is placed on the use of different quadrature amplitude modulation formats, channel spacings, digital back-propagation schemes and probabilistic shaping. The advantages of using MI over the prevailing $Q$-factor as a figure of merit of coded optical systems are also highlighted.
2007.06653
Alex Berke
Alex Berke, Jason Nawyn, Thomas Sanchez Lengeling, Kent Larson
Urban Mobility Swarms: A Scalable Implementation
null
null
10.1007/978-3-030-52246-9_1
null
cs.MA cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a system to coordinate 'urban mobility swarms' in order to promote the use and safety of lightweight, sustainable transit, while enhancing the vibrancy and community fabric of cities. This work draws from behavior exhibited by swarms of nocturnal insects, such as crickets and fireflies, whereby synchrony unifies individuals in a decentralized network. Coordination naturally emerges in these cases and provides a compelling demonstration of 'strength in numbers'. Our work is applied to coordinating lightweight vehicles, such as bicycles, which are automatically inducted into ad-hoc 'swarms', united by the synchronous pulsation of light. We model individual riders as nodes in a decentralized network and synchronize their behavior via a peer-to-peer message protocol and algorithm, which preserves individual privacy. Nodes broadcast over radio with a transmission range tuned to localize swarm membership. Nodes then join or disconnect from others based on proximity, accommodating the dynamically changing topology of urban mobility networks. This paper provides a technical description of our system, including the protocol and algorithm to coordinate the swarming behavior that emerges from it. We also demonstrate its implementation in code, circuitry, and hardware, with a system prototype tested on a city bike-share. In doing so, we evince the scalability of our system. Our prototype uses low-cost components, and bike-share programs, which manage bicycle fleets distributed across cities, could deploy the system at city-scale. Our flexible, decentralized design allows additional bikes to then connect with the network, enhancing its scale and impact.
[ { "created": "Mon, 13 Jul 2020 19:44:16 GMT", "version": "v1" } ]
2020-07-15
[ [ "Berke", "Alex", "" ], [ "Nawyn", "Jason", "" ], [ "Lengeling", "Thomas Sanchez", "" ], [ "Larson", "Kent", "" ] ]
We present a system to coordinate 'urban mobility swarms' in order to promote the use and safety of lightweight, sustainable transit, while enhancing the vibrancy and community fabric of cities. This work draws from behavior exhibited by swarms of nocturnal insects, such as crickets and fireflies, whereby synchrony unifies individuals in a decentralized network. Coordination naturally emerges in these cases and provides a compelling demonstration of 'strength in numbers'. Our work is applied to coordinating lightweight vehicles, such as bicycles, which are automatically inducted into ad-hoc 'swarms', united by the synchronous pulsation of light. We model individual riders as nodes in a decentralized network and synchronize their behavior via a peer-to-peer message protocol and algorithm, which preserves individual privacy. Nodes broadcast over radio with a transmission range tuned to localize swarm membership. Nodes then join or disconnect from others based on proximity, accommodating the dynamically changing topology of urban mobility networks. This paper provides a technical description of our system, including the protocol and algorithm to coordinate the swarming behavior that emerges from it. We also demonstrate its implementation in code, circuitry, and hardware, with a system prototype tested on a city bike-share. In doing so, we evince the scalability of our system. Our prototype uses low-cost components, and bike-share programs, which manage bicycle fleets distributed across cities, could deploy the system at city-scale. Our flexible, decentralized design allows additional bikes to then connect with the network, enhancing its scale and impact.
2207.05624
Ali Munir
Sepehr Abbasi, Shiva Ketabi, Ali Munir, Mahmoud Bahnasy, Yashar Ganjali
DWTCP: Ultra Low Latency Congestion Control Protocol for Data Centers
19 pages, 17 figures
null
null
null
cs.NI cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Congestion control algorithms rely on a variety of congestion signals (packet loss, Explicit Congestion Notification, delay, etc.) to achieve fast convergence, high utilization, and fairness among flows. A key limitation of these congestion signals is that they are either late in feedback or they incur significant overheads. An ideal congestion control must discover any available bandwidth in the network, detect congestion as soon as link utilization approaches full capacity, and react promptly to avoid queuing and packet drops, without significant overheads. To this end, this work proposes the Scout service, which leverages priority queues to infer bandwidth availability and link busyness at the host. The key observation here is that as the high priority queue (HPQ) gets busier, the low priority queue (LPQ) is served less. Therefore, the state of the link can be observed from the LPQ and any congestion can be detected several RTTs earlier than by observing the HPQ. We propose a new transport protocol, Double-Window Transmission Control Protocol (DWTCP), that builds upon the Scout service to dynamically adjust its congestion window. Our testbed and simulation-based evaluation demonstrates that Scout enables a data center transport to achieve high throughput, near-zero queues, lower latency, and high fairness.
[ { "created": "Tue, 12 Jul 2022 15:46:19 GMT", "version": "v1" } ]
2022-07-13
[ [ "Abbasi", "Sepehr", "" ], [ "Ketabi", "Shiva", "" ], [ "Munir", "Ali", "" ], [ "Bahnasy", "Mahmoud", "" ], [ "Ganjali", "Yashar", "" ] ]
Congestion control algorithms rely on a variety of congestion signals (packet loss, Explicit Congestion Notification, delay, etc.) to achieve fast convergence, high utilization, and fairness among flows. A key limitation of these congestion signals is that they are either late in feedback or they incur significant overheads. An ideal congestion control must discover any available bandwidth in the network, detect congestion as soon as link utilization approaches full capacity, and react promptly to avoid queuing and packet drops, without significant overheads. To this end, this work proposes the Scout service, which leverages priority queues to infer bandwidth availability and link busyness at the host. The key observation here is that as the high priority queue (HPQ) gets busier, the low priority queue (LPQ) is served less. Therefore, the state of the link can be observed from the LPQ and any congestion can be detected several RTTs earlier than by observing the HPQ. We propose a new transport protocol, Double-Window Transmission Control Protocol (DWTCP), that builds upon the Scout service to dynamically adjust its congestion window. Our testbed and simulation-based evaluation demonstrates that Scout enables a data center transport to achieve high throughput, near-zero queues, lower latency, and high fairness.
1804.07237
Jiamiao Xu
Jiamiao Xu, Shujian Yu, Xinge You, Mengjun Leng, Xiao-Yuan Jing and C. L. Philip Chen
Multi-view Hybrid Embedding: A Divide-and-Conquer Approach
This paper has been accepted by IEEE Transactions on Cybernetics
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel cross-view classification algorithm where the gallery and probe data come from different views. A popular approach to tackle this problem is multi-view subspace learning (MvSL), which aims to learn a latent subspace shared by multi-view data. Despite promising results obtained on some applications, the performance of existing methods deteriorates dramatically when the multi-view data is sampled from nonlinear manifolds or suffers from heavy outliers. To circumvent this drawback, motivated by the Divide-and-Conquer strategy, we propose Multi-view Hybrid Embedding (MvHE), a unique method of dividing the problem of cross-view classification into three subproblems and building one model for each subproblem. Specifically, the first model is designed to remove view discrepancy, whereas the second and third models attempt to discover the intrinsic nonlinear structure and to increase discriminability in intra-view and inter-view samples, respectively. A kernel extension is conducted to further boost the representation power of MvHE. Extensive experiments are conducted on four benchmark datasets. Our methods demonstrate overwhelming advantages against the state-of-the-art MvSL based cross-view classification approaches in terms of classification accuracy and robustness.
[ { "created": "Thu, 19 Apr 2018 15:38:15 GMT", "version": "v1" }, { "created": "Mon, 21 Jan 2019 10:46:35 GMT", "version": "v2" } ]
2019-01-23
[ [ "Xu", "Jiamiao", "" ], [ "Yu", "Shujian", "" ], [ "You", "Xinge", "" ], [ "Leng", "Mengjun", "" ], [ "Jing", "Xiao-Yuan", "" ], [ "Chen", "C. L. Philip", "" ] ]
We present a novel cross-view classification algorithm where the gallery and probe data come from different views. A popular approach to tackle this problem is multi-view subspace learning (MvSL), which aims to learn a latent subspace shared by multi-view data. Despite promising results obtained on some applications, the performance of existing methods deteriorates dramatically when the multi-view data is sampled from nonlinear manifolds or suffers from heavy outliers. To circumvent this drawback, motivated by the Divide-and-Conquer strategy, we propose Multi-view Hybrid Embedding (MvHE), a unique method of dividing the problem of cross-view classification into three subproblems and building one model for each subproblem. Specifically, the first model is designed to remove view discrepancy, whereas the second and third models attempt to discover the intrinsic nonlinear structure and to increase discriminability in intra-view and inter-view samples, respectively. A kernel extension is conducted to further boost the representation power of MvHE. Extensive experiments are conducted on four benchmark datasets. Our methods demonstrate overwhelming advantages against the state-of-the-art MvSL based cross-view classification approaches in terms of classification accuracy and robustness.
2203.16110
Sean Bin Yang
Sean Bin Yang, Chenjuan Guo, Jilin Hu, Bin Yang, Jian Tang, and Christian S. Jensen
Weakly-supervised Temporal Path Representation Learning with Contrastive Curriculum Learning -- Extended Version
This paper has been accepted by IEEE ICDE-22
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
In step with the digitalization of transportation, we are witnessing a growing range of path-based smart-city applications, e.g., travel-time estimation and travel path ranking. A temporal path (TP) that incorporates temporal information, e.g., departure time, into the path is fundamental to enable such applications. In this setting, it is essential to learn generic temporal path representations (TPRs) that consider spatial and temporal correlations simultaneously and that can be used in different applications, i.e., downstream tasks. Existing methods fail to achieve this goal since (i) supervised methods require large amounts of task-specific labels when training and thus fail to generalize the obtained TPRs to other tasks; and (ii) though unsupervised methods can learn generic representations, they disregard the temporal aspect, leading to sub-optimal results. To contend with the limitations of existing solutions, we propose a Weakly-Supervised Contrastive (WSC) learning model. We first propose a temporal path encoder that encodes both the spatial and temporal information of a temporal path into a TPR. To train the encoder, we introduce weak labels that are easy and inexpensive to obtain and are relevant to different tasks, e.g., temporal labels indicating peak vs. off-peak hours from departure times. Based on the weak labels, we construct meaningful positive and negative temporal path samples by considering both spatial and temporal information, which facilitates training the encoder using contrastive learning by pulling the positive samples' representations closer while pushing the negative samples' representations away. To better guide contrastive learning, we propose a learning strategy based on Curriculum Learning such that learning proceeds from easy to hard training instances. Experimental studies verify the effectiveness of the proposed method.
[ { "created": "Wed, 30 Mar 2022 07:36:20 GMT", "version": "v1" }, { "created": "Fri, 1 Apr 2022 15:37:26 GMT", "version": "v2" }, { "created": "Fri, 15 Apr 2022 15:41:04 GMT", "version": "v3" } ]
2022-04-18
[ [ "Yang", "Sean Bin", "" ], [ "Guo", "Chenjuan", "" ], [ "Hu", "Jilin", "" ], [ "Yang", "Bin", "" ], [ "Tang", "Jian", "" ], [ "Jensen", "Christian S.", "" ] ]
In step with the digitalization of transportation, we are witnessing a growing range of path-based smart-city applications, e.g., travel-time estimation and travel path ranking. A temporal path (TP) that incorporates temporal information, e.g., departure time, into the path is fundamental to enable such applications. In this setting, it is essential to learn generic temporal path representations (TPRs) that consider spatial and temporal correlations simultaneously and that can be used in different applications, i.e., downstream tasks. Existing methods fail to achieve this goal since (i) supervised methods require large amounts of task-specific labels when training and thus fail to generalize the obtained TPRs to other tasks; and (ii) though unsupervised methods can learn generic representations, they disregard the temporal aspect, leading to sub-optimal results. To contend with the limitations of existing solutions, we propose a Weakly-Supervised Contrastive (WSC) learning model. We first propose a temporal path encoder that encodes both the spatial and temporal information of a temporal path into a TPR. To train the encoder, we introduce weak labels that are easy and inexpensive to obtain and are relevant to different tasks, e.g., temporal labels indicating peak vs. off-peak hours from departure times. Based on the weak labels, we construct meaningful positive and negative temporal path samples by considering both spatial and temporal information, which facilitates training the encoder using contrastive learning by pulling the positive samples' representations closer while pushing the negative samples' representations away. To better guide contrastive learning, we propose a learning strategy based on Curriculum Learning such that learning proceeds from easy to hard training instances. Experimental studies verify the effectiveness of the proposed method.
1106.3694
Hugo Jim\'enez-P\'erez
Hugo Jim\'enez-P\'erez and Jacques Laskar
A time-parallel algorithm for almost integrable Hamiltonian systems
19 pages, 6 figures
null
null
null
cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a time-parallel algorithm for numerically solving almost integrable Hamiltonian systems in action-angle coordinates. This algorithm is a refinement of that introduced by Saha, Stadel and Tremaine in 1997 (SST97) for the same type of problems. Our refined algorithm achieves better convergence by using derivatives of the perturbing term that were not considered in the original SST97 algorithm. An advantage of this algorithm is its independence from the step size of the parallelized procedures, which can be considered a particular case of the parareal scheme.
[ { "created": "Sat, 18 Jun 2011 22:37:28 GMT", "version": "v1" } ]
2011-06-21
[ [ "Jiménez-Pérez", "Hugo", "" ], [ "Laskar", "Jacques", "" ] ]
We introduce a time-parallel algorithm for numerically solving almost integrable Hamiltonian systems in action-angle coordinates. This algorithm is a refinement of that introduced by Saha, Stadel and Tremaine in 1997 (SST97) for the same type of problems. Our refined algorithm achieves better convergence by using derivatives of the perturbing term that were not considered in the original SST97 algorithm. An advantage of this algorithm is its independence from the step size of the parallelized procedures, which can be considered a particular case of the parareal scheme.
2204.07026
Guilherme Maeda
Guilherme Maeda
Blending Primitive Policies in Shared Control for Assisted Teleoperation
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Movement primitives have the property of accommodating changes in the robot state while maintaining attraction to the original policy. As such, we investigate the use of primitives as a blending mechanism by considering that state deviations from the original policy are caused by user inputs. As the primitive recovers from the user input, it implicitly blends human and robot policies without requiring their weightings -- referred to as arbitration. In this paper, we adopt Dynamical Movement Primitives (DMPs), which allow us to avoid the need for multiple demonstrations, and are fast enough to enable numerous instantiations, one for each hypothesis of the human intent. User studies are presented on assisted teleoperation tasks of reaching multiple goals and dynamic obstacle avoidance. Comparable performance to conventional teleoperation was achieved while significantly decreasing human intervention, often by more than 60%.
[ { "created": "Thu, 14 Apr 2022 15:28:00 GMT", "version": "v1" } ]
2022-04-15
[ [ "Maeda", "Guilherme", "" ] ]
Movement primitives have the property of accommodating changes in the robot state while maintaining attraction to the original policy. As such, we investigate the use of primitives as a blending mechanism by considering that state deviations from the original policy are caused by user inputs. As the primitive recovers from the user input, it implicitly blends human and robot policies without requiring their weightings -- referred to as arbitration. In this paper, we adopt Dynamical Movement Primitives (DMPs), which allow us to avoid the need for multiple demonstrations, and are fast enough to enable numerous instantiations, one for each hypothesis of the human intent. User studies are presented on assisted teleoperation tasks of reaching multiple goals and dynamic obstacle avoidance. Comparable performance to conventional teleoperation was achieved while significantly decreasing human intervention, often by more than 60%.
2205.01626
Geoffrey Bomarito
G.F. Bomarito and P.E. Leser and N.C.M Strauss and K.M. Garbrecht and J.D. Hochhalter
Automated Learning of Interpretable Models with Quantified Uncertainty
null
null
10.1016/j.cma.2022.115732
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interpretability and uncertainty quantification in machine learning can provide justification for decisions, promote scientific discovery and lead to a better understanding of model behavior. Symbolic regression provides inherently interpretable machine learning, but relatively little work has focused on the use of symbolic regression on noisy data and the accompanying necessity to quantify uncertainty. A new Bayesian framework for genetic-programming-based symbolic regression (GPSR) is introduced that uses model evidence (i.e., marginal likelihood) to formulate replacement probability during the selection phase of evolution. Model parameter uncertainty is automatically quantified, enabling probabilistic predictions with each equation produced by the GPSR algorithm. Model evidence is also quantified in this process, and its use is shown to increase interpretability, improve robustness to noise, and reduce overfitting when compared to a conventional GPSR implementation on both numerical and physical experiments.
[ { "created": "Tue, 12 Apr 2022 19:56:42 GMT", "version": "v1" } ]
2022-11-23
[ [ "Bomarito", "G. F.", "" ], [ "Leser", "P. E.", "" ], [ "Strauss", "N. C. M", "" ], [ "Garbrecht", "K. M.", "" ], [ "Hochhalter", "J. D.", "" ] ]
Interpretability and uncertainty quantification in machine learning can provide justification for decisions, promote scientific discovery and lead to a better understanding of model behavior. Symbolic regression provides inherently interpretable machine learning, but relatively little work has focused on the use of symbolic regression on noisy data and the accompanying necessity to quantify uncertainty. A new Bayesian framework for genetic-programming-based symbolic regression (GPSR) is introduced that uses model evidence (i.e., marginal likelihood) to formulate replacement probability during the selection phase of evolution. Model parameter uncertainty is automatically quantified, enabling probabilistic predictions with each equation produced by the GPSR algorithm. Model evidence is also quantified in this process, and its use is shown to increase interpretability, improve robustness to noise, and reduce overfitting when compared to a conventional GPSR implementation on both numerical and physical experiments.
2010.09921
Jun Yu
Cheng Meng and Jun Yu and Jingyi Zhang and Ping Ma and Wenxuan Zhong
Sufficient dimension reduction for classification using principal optimal transport direction
18 pages, 4 figures, to be published in 34th Conference on Neural Information Processing Systems (NeurIPS 2020), add the supplementary material
null
null
null
cs.LG stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sufficient dimension reduction is used pervasively as a supervised dimension reduction approach. Most existing sufficient dimension reduction methods are developed for data with a continuous response and may have unsatisfactory performance for categorical responses, especially binary responses. To address this issue, we propose a novel estimation method of the sufficient dimension reduction subspace (SDR subspace) using optimal transport. The proposed method, named principal optimal transport direction (POTD), estimates the basis of the SDR subspace using the principal directions of the optimal transport coupling between the data from different response categories. The proposed method also reveals the relationship among three seemingly unrelated topics, i.e., sufficient dimension reduction, support vector machines, and optimal transport. We study the asymptotic properties of POTD and show that in the cases when the class labels contain no error, POTD estimates the SDR subspace exclusively. Empirical studies show that POTD outperforms most of the state-of-the-art linear dimension reduction methods.
[ { "created": "Mon, 19 Oct 2020 23:38:31 GMT", "version": "v1" }, { "created": "Wed, 21 Oct 2020 01:48:14 GMT", "version": "v2" }, { "created": "Tue, 24 Nov 2020 04:34:24 GMT", "version": "v3" }, { "created": "Tue, 2 Feb 2021 04:10:15 GMT", "version": "v4" } ]
2021-02-03
[ [ "Meng", "Cheng", "" ], [ "Yu", "Jun", "" ], [ "Zhang", "Jingyi", "" ], [ "Ma", "Ping", "" ], [ "Zhong", "Wenxuan", "" ] ]
Sufficient dimension reduction is used pervasively as a supervised dimension reduction approach. Most existing sufficient dimension reduction methods are developed for data with a continuous response and may have unsatisfactory performance for categorical responses, especially binary responses. To address this issue, we propose a novel estimation method of the sufficient dimension reduction subspace (SDR subspace) using optimal transport. The proposed method, named principal optimal transport direction (POTD), estimates the basis of the SDR subspace using the principal directions of the optimal transport coupling between the data from different response categories. The proposed method also reveals the relationship among three seemingly unrelated topics, i.e., sufficient dimension reduction, support vector machines, and optimal transport. We study the asymptotic properties of POTD and show that in the cases when the class labels contain no error, POTD estimates the SDR subspace exclusively. Empirical studies show that POTD outperforms most of the state-of-the-art linear dimension reduction methods.
1705.07771
Kang Wang
Kang Wang, Xueqian Wang, Gang Li
Simulation Experiment of BCI Based on Imagined Speech EEG Decoding
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain-Computer Interfaces (BCIs) can help patients with neuromuscular diseases restore part of the movement and communication abilities they have lost. Most BCIs rely on mapping brain activities to device instructions, but the limited number of distinguishable brain activities limits the capabilities of such BCIs. To address this limitation, this paper verifies the feasibility of constructing a BCI based on decoding imagined-speech electroencephalography (EEG). As sentences decoded from EEG can carry rich meanings, BCIs based on EEG decoding can support numerous control instructions. By combining a modified EEG feature extraction method with connectionist temporal classification (CTC), this paper simulates decoding imagined-speech EEG using synthetic EEG data, without the help of a speech signal. The performance of the decoding model on synthetic data demonstrates, to a certain extent, the feasibility of constructing a BCI based on imagined-speech brain signals.
[ { "created": "Mon, 22 May 2017 14:34:20 GMT", "version": "v1" } ]
2017-05-23
[ [ "Wang", "Kang", "" ], [ "Wang", "Xueqian", "" ], [ "Li", "Gang", "" ] ]
Brain-Computer Interfaces (BCIs) can help patients with neuromuscular diseases restore part of the movement and communication abilities they have lost. Most BCIs rely on mapping brain activities to device instructions, but the limited number of distinguishable brain activities limits the capabilities of such BCIs. To address this limitation, this paper verifies the feasibility of constructing a BCI based on decoding imagined-speech electroencephalography (EEG). As sentences decoded from EEG can carry rich meanings, BCIs based on EEG decoding can support numerous control instructions. By combining a modified EEG feature extraction method with connectionist temporal classification (CTC), this paper simulates decoding imagined-speech EEG using synthetic EEG data, without the help of a speech signal. The performance of the decoding model on synthetic data demonstrates, to a certain extent, the feasibility of constructing a BCI based on imagined-speech brain signals.
2207.03622
Hazim Shakhatreh
Hazim Shakhatreh, Ahmad Sawalmeh, Ali H Alenezi, Sharief Abdel-Razeq, Muhannad Almutiry, Ala Al-Fuqaha
Mobile-IRS Assisted Next Generation UAV Communication Networks
11 pages, 8 figures
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Prior research on intelligent reflecting surface (IRS)-assisted unmanned aerial vehicle (UAV) communications has focused on an IRS at a fixed location or mounted on a UAV. The assumption that the IRS is located at a fixed position prevents mobile users from fully realizing many wireless network benefits, such as data rate and coverage. Furthermore, assuming that the IRS is placed on a UAV is impractical for various reasons, including the IRS's weight and size and wind speed in severe weather. Unlike previous studies, this study assumes a single UAV and an IRS mounted on a mobile ground vehicle (M-IRS) deployed in an Internet-of-Things (IoT) 6G wireless network to maximize the average data rate. Such a methodology for providing wireless coverage using an M-IRS-assisted UAV system is expected in smart cities. In this paper, we formulate an optimization problem to find an efficient trajectory for the UAV, an efficient path for the M-IRS, and users' power allocation coefficients that maximize the average data rate for mobile ground users. Due to its intractability, we propose efficient techniques that help find a solution to the optimization problem. First, we show that our dynamic power allocation technique outperforms the fixed power allocation technique in terms of network average sum rate. Then we employ an individual movement model (the Random Waypoint Model) to represent the users' movements inside the coverage area. Finally, we propose an efficient approach using a Genetic Algorithm (GA) for finding an efficient trajectory for the UAV and an efficient path for the M-IRS that provide wireless connectivity for mobile users during their movement. We demonstrate through simulations that our methodology can enhance the average data rate by 15\% on average compared with a static IRS and by 25\% on average compared with no IRS.
[ { "created": "Fri, 8 Jul 2022 00:06:06 GMT", "version": "v1" } ]
2022-07-11
[ [ "Shakhatreh", "Hazim", "" ], [ "Sawalmeh", "Ahmad", "" ], [ "Alenezi", "Ali H", "" ], [ "Abdel-Razeq", "Sharief", "" ], [ "Almutiry", "Muhannad", "" ], [ "Al-Fuqaha", "Ala", "" ] ]
Prior research on intelligent reflecting surface (IRS)-assisted unmanned aerial vehicle (UAV) communications has focused on an IRS at a fixed location or mounted on a UAV. The assumption that the IRS is located at a fixed position prevents mobile users from fully realizing many wireless network benefits, such as data rate and coverage. Furthermore, assuming that the IRS is placed on a UAV is impractical for various reasons, including the IRS's weight and size and wind speed in severe weather. Unlike previous studies, this study assumes a single UAV and an IRS mounted on a mobile ground vehicle (M-IRS) deployed in an Internet-of-Things (IoT) 6G wireless network to maximize the average data rate. Such a methodology for providing wireless coverage using an M-IRS-assisted UAV system is expected in smart cities. In this paper, we formulate an optimization problem to find an efficient trajectory for the UAV, an efficient path for the M-IRS, and users' power allocation coefficients that maximize the average data rate for mobile ground users. Due to its intractability, we propose efficient techniques that help find a solution to the optimization problem. First, we show that our dynamic power allocation technique outperforms the fixed power allocation technique in terms of network average sum rate. Then we employ an individual movement model (the Random Waypoint Model) to represent the users' movements inside the coverage area. Finally, we propose an efficient approach using a Genetic Algorithm (GA) for finding an efficient trajectory for the UAV and an efficient path for the M-IRS that provide wireless connectivity for mobile users during their movement. We demonstrate through simulations that our methodology can enhance the average data rate by 15\% on average compared with a static IRS and by 25\% on average compared with no IRS.
1901.07521
Ali Baheri
Ali Baheri and Chris Vermillion
Economically Efficient Combined Plant and Controller Design Using Batch Bayesian Optimization: Mathematical Framework and Airborne Wind Energy Case Study
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel data-driven nested optimization framework that addresses the problem of coupling between plant and controller optimization. This optimization strategy is tailored towards instances where a closed-form expression for the system dynamic response is unobtainable and simulations or experiments are necessary. Specifically, Bayesian Optimization, which is a data-driven technique for finding the optimum of an unknown and expensive-to-evaluate objective function, is employed to solve a nested optimization problem. The underlying objective function is modeled by a Gaussian Process (GP); then, Bayesian Optimization utilizes the predictive uncertainty information from the GP to determine the best subsequent control or plant parameters. The proposed framework differs from the majority of co-design literature where there exists a closed-form model of the system dynamics. Furthermore, we utilize the idea of Batch Bayesian Optimization at the plant optimization level to generate a set of plant designs at each iteration of the overall optimization process, recognizing that there will exist economies of scale in running multiple experiments in each iteration of the plant design process. We validate the proposed framework for a Buoyant Airborne Turbine (BAT). We choose the horizontal stabilizer area, longitudinal center of mass relative to center of buoyancy (plant parameters), and the pitch angle set-point (controller parameter) as our decision variables. Our results demonstrate that these plant and control parameters converge to their respective optimal values within only a few iterations.
[ { "created": "Tue, 22 Jan 2019 18:52:41 GMT", "version": "v1" } ]
2019-01-23
[ [ "Baheri", "Ali", "" ], [ "Vermillion", "Chris", "" ] ]
We present a novel data-driven nested optimization framework that addresses the problem of coupling between plant and controller optimization. This optimization strategy is tailored towards instances where a closed-form expression for the system dynamic response is unobtainable and simulations or experiments are necessary. Specifically, Bayesian Optimization, which is a data-driven technique for finding the optimum of an unknown and expensive-to-evaluate objective function, is employed to solve a nested optimization problem. The underlying objective function is modeled by a Gaussian Process (GP); then, Bayesian Optimization utilizes the predictive uncertainty information from the GP to determine the best subsequent control or plant parameters. The proposed framework differs from the majority of co-design literature where there exists a closed-form model of the system dynamics. Furthermore, we utilize the idea of Batch Bayesian Optimization at the plant optimization level to generate a set of plant designs at each iteration of the overall optimization process, recognizing that there will exist economies of scale in running multiple experiments in each iteration of the plant design process. We validate the proposed framework for a Buoyant Airborne Turbine (BAT). We choose the horizontal stabilizer area, longitudinal center of mass relative to center of buoyancy (plant parameters), and the pitch angle set-point (controller parameter) as our decision variables. Our results demonstrate that these plant and control parameters converge to their respective optimal values within only a few iterations.
2210.11987
Marco Gaido
Marco Gaido, Sara Papi, Matteo Negri, Marco Turchi
Joint Speech Translation and Named Entity Recognition
Accepted at INTERSPEECH 2023
null
10.21437/Interspeech.2023-1767
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Modern automatic translation systems aim at placing the human at the center by providing contextual support and knowledge. In this context, a critical task is enriching the output with information regarding the mentioned entities, which is currently achieved by processing the generated translation with named entity recognition (NER) and entity linking systems. In light of the recent promising results shown by direct speech translation (ST) models and the known weaknesses of cascades (error propagation and additional latency), in this paper we propose multitask models that jointly perform ST and NER, and compare them with a cascade baseline. The experimental results show that our models significantly outperform the cascade on the NER task (by 0.4-1.0 F1), without degradation in translation quality, and with the same computational efficiency as a plain direct ST model.
[ { "created": "Fri, 21 Oct 2022 14:24:46 GMT", "version": "v1" }, { "created": "Sat, 20 May 2023 14:33:35 GMT", "version": "v2" } ]
2023-10-09
[ [ "Gaido", "Marco", "" ], [ "Papi", "Sara", "" ], [ "Negri", "Matteo", "" ], [ "Turchi", "Marco", "" ] ]
Modern automatic translation systems aim at placing the human at the center by providing contextual support and knowledge. In this context, a critical task is enriching the output with information regarding the mentioned entities, which is currently achieved by processing the generated translation with named entity recognition (NER) and entity linking systems. In light of the recent promising results shown by direct speech translation (ST) models and the known weaknesses of cascades (error propagation and additional latency), in this paper we propose multitask models that jointly perform ST and NER, and compare them with a cascade baseline. The experimental results show that our models significantly outperform the cascade on the NER task (by 0.4-1.0 F1), without degradation in translation quality, and with the same computational efficiency as a plain direct ST model.
2012.05836
Tiago Eugenio de Melo
Tiago de Melo
User Questions from Tweets on COVID-19: An Exploratory Study
in Portuguese
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Social media platforms, such as Twitter, provide a suitable avenue for users (people or patients) concerned with health questions to discuss and share information with each other. In December 2019, the first few coronavirus disease cases were reported in China. Soon after, the World Health Organization (WHO) declared a state of emergency due to the rapid spread of the virus in other parts of the world. In this work, we used automated extraction of COVID-19 discussions from Twitter and a natural language processing (NLP) method based on topic modeling to discover the main questions related to COVID-19 in tweets. Moreover, we created a Named Entity Recognition (NER) model to identify the main entities of four different categories: disease, drug, person, and organization. Our findings can help policy makers and health care organizations understand the issues people raise about COVID-19 so that they can be addressed appropriately.
[ { "created": "Fri, 20 Nov 2020 12:29:55 GMT", "version": "v1" } ]
2020-12-11
[ [ "de Melo", "Tiago", "" ] ]
Social media platforms, such as Twitter, provide a suitable avenue for users (people or patients) concerned with health questions to discuss and share information with each other. In December 2019, the first few coronavirus disease cases were reported in China. Soon after, the World Health Organization (WHO) declared a state of emergency due to the rapid spread of the virus in other parts of the world. In this work, we used automated extraction of COVID-19 discussions from Twitter and a natural language processing (NLP) method based on topic modeling to discover the main questions related to COVID-19 in tweets. Moreover, we created a Named Entity Recognition (NER) model to identify the main entities of four different categories: disease, drug, person, and organization. Our findings can help policy makers and health care organizations understand the issues people raise about COVID-19 so that they can be addressed appropriately.
2308.02357
Sanju Tiwari Dr
Nandana Mihindukulasooriya, Sanju Tiwari, Carlos F. Enguix, Kusum Lata
Text2KGBench: A Benchmark for Ontology-Driven Knowledge Graph Generation from Text
15 pages, 3 figures, 4 tables. Accepted at ISWC 2023 (Resources Track)
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The recent advances in large language models (LLMs) and foundation models with emergent capabilities have been shown to improve the performance of many NLP tasks. LLMs and Knowledge Graphs (KGs) can complement each other such that LLMs can be used for KG construction or completion, while existing KGs can be used for different tasks such as making LLM outputs explainable or fact-checking in a Neuro-Symbolic manner. In this paper, we present Text2KGBench, a benchmark to evaluate the capabilities of language models to generate KGs from natural language text guided by an ontology. Given an input ontology and a set of sentences, the task is to extract facts from the text while complying with the given ontology (concepts, relations, domain/range constraints) and being faithful to the input sentences. We provide two datasets: (i) Wikidata-TekGen with 10 ontologies and 13,474 sentences and (ii) DBpedia-WebNLG with 19 ontologies and 4,860 sentences. We define seven evaluation metrics to measure fact extraction performance, ontology conformance, and hallucinations by LLMs. Furthermore, we provide results for two baseline models, Vicuna-13B and Alpaca-LoRA-13B, using automatic prompt generation from test cases. The baseline results show that there is room for improvement using both Semantic Web and Natural Language Processing techniques.
[ { "created": "Fri, 4 Aug 2023 14:47:15 GMT", "version": "v1" } ]
2023-08-07
[ [ "Mihindukulasooriya", "Nandana", "" ], [ "Tiwari", "Sanju", "" ], [ "Enguix", "Carlos F.", "" ], [ "Lata", "Kusum", "" ] ]
The recent advances in large language models (LLMs) and foundation models with emergent capabilities have been shown to improve the performance of many NLP tasks. LLMs and Knowledge Graphs (KGs) can complement each other such that LLMs can be used for KG construction or completion, while existing KGs can be used for different tasks such as making LLM outputs explainable or fact-checking in a Neuro-Symbolic manner. In this paper, we present Text2KGBench, a benchmark to evaluate the capabilities of language models to generate KGs from natural language text guided by an ontology. Given an input ontology and a set of sentences, the task is to extract facts from the text while complying with the given ontology (concepts, relations, domain/range constraints) and being faithful to the input sentences. We provide two datasets: (i) Wikidata-TekGen with 10 ontologies and 13,474 sentences and (ii) DBpedia-WebNLG with 19 ontologies and 4,860 sentences. We define seven evaluation metrics to measure fact extraction performance, ontology conformance, and hallucinations by LLMs. Furthermore, we provide results for two baseline models, Vicuna-13B and Alpaca-LoRA-13B, using automatic prompt generation from test cases. The baseline results show that there is room for improvement using both Semantic Web and Natural Language Processing techniques.
1906.06606
Yair Feldman
Yair Feldman, Ran El-Yaniv
Multi-Hop Paragraph Retrieval for Open-Domain Question Answering
ACL 2019
null
null
null
cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with the task of multi-hop open-domain Question Answering (QA). This task is particularly challenging since it requires the simultaneous performance of textual reasoning and efficient searching. We present a method for retrieving multiple supporting paragraphs, nested amidst a large knowledge base, which contain the necessary evidence to answer a given question. Our method iteratively retrieves supporting paragraphs by forming a joint vector representation of both a question and a paragraph. The retrieval is performed by considering contextualized sentence-level representations of the paragraphs in the knowledge source. Our method achieves state-of-the-art performance over two well-known datasets, SQuAD-Open and HotpotQA, which serve as our single- and multi-hop open-domain QA benchmarks, respectively.
[ { "created": "Sat, 15 Jun 2019 19:17:10 GMT", "version": "v1" } ]
2019-06-18
[ [ "Feldman", "Yair", "" ], [ "El-Yaniv", "Ran", "" ] ]
This paper is concerned with the task of multi-hop open-domain Question Answering (QA). This task is particularly challenging since it requires the simultaneous performance of textual reasoning and efficient searching. We present a method for retrieving multiple supporting paragraphs, nested amidst a large knowledge base, which contain the necessary evidence to answer a given question. Our method iteratively retrieves supporting paragraphs by forming a joint vector representation of both a question and a paragraph. The retrieval is performed by considering contextualized sentence-level representations of the paragraphs in the knowledge source. Our method achieves state-of-the-art performance over two well-known datasets, SQuAD-Open and HotpotQA, which serve as our single- and multi-hop open-domain QA benchmarks, respectively.
1707.00587
Paul Jaeger
Fabian Isensee, Paul Jaeger, Peter M. Full, Ivo Wolf, Sandy Engelhardt and Klaus H. Maier-Hein
Automatic Cardiac Disease Assessment on cine-MRI via Time-Series Segmentation and Domain Specific Features
To appear in the STACOM 2017 proceedings
null
10.1007/978-3-319-75541-0
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cardiac magnetic resonance imaging improves the diagnosis of cardiovascular diseases by providing images at high spatiotemporal resolution. Manual evaluation of these time-series, however, is expensive and prone to biased and non-reproducible outcomes. In this paper, we present a method that addresses these limitations by integrating segmentation and disease classification into a fully automatic processing pipeline. We use an ensemble of UNet-inspired architectures for segmentation of cardiac structures such as the left and right ventricular cavity (LVC, RVC) and the left ventricular myocardium (LVM) at each time instance of the cardiac cycle. For the classification task, information is extracted from the segmented time-series in the form of comprehensive features handcrafted to reflect diagnostic clinical procedures. Based on these features, we train an ensemble of heavily regularized multilayer perceptrons (MLPs) and a random forest classifier to predict the pathologic target class. We evaluated our method on the ACDC dataset (4 pathology groups, 1 healthy group) and achieve dice scores of 0.945 (LVC), 0.908 (RVC) and 0.905 (LVM) in a cross-validation over the training set (100 cases) and 0.950 (LVC), 0.923 (RVC) and 0.911 (LVM) on the test set (50 cases). We report a classification accuracy of 94% in the training set cross-validation and 92% on the test set. Our results underpin the potential of machine learning methods for accurate, fast and reproducible segmentation and computer-assisted diagnosis (CAD).
[ { "created": "Mon, 3 Jul 2017 15:10:30 GMT", "version": "v1" }, { "created": "Thu, 25 Jan 2018 08:15:24 GMT", "version": "v2" } ]
2018-04-06
[ [ "Isensee", "Fabian", "" ], [ "Jaeger", "Paul", "" ], [ "Full", "Peter M.", "" ], [ "Wolf", "Ivo", "" ], [ "Engelhardt", "Sandy", "" ], [ "Maier-Hein", "Klaus H.", "" ] ]
Cardiac magnetic resonance imaging improves the diagnosis of cardiovascular diseases by providing images at high spatiotemporal resolution. Manual evaluation of these time-series, however, is expensive and prone to biased and non-reproducible outcomes. In this paper, we present a method that addresses these limitations by integrating segmentation and disease classification into a fully automatic processing pipeline. We use an ensemble of UNet-inspired architectures for segmentation of cardiac structures such as the left and right ventricular cavity (LVC, RVC) and the left ventricular myocardium (LVM) at each time instance of the cardiac cycle. For the classification task, information is extracted from the segmented time-series in the form of comprehensive features handcrafted to reflect diagnostic clinical procedures. Based on these features, we train an ensemble of heavily regularized multilayer perceptrons (MLPs) and a random forest classifier to predict the pathologic target class. We evaluated our method on the ACDC dataset (4 pathology groups, 1 healthy group) and achieve dice scores of 0.945 (LVC), 0.908 (RVC) and 0.905 (LVM) in a cross-validation over the training set (100 cases) and 0.950 (LVC), 0.923 (RVC) and 0.911 (LVM) on the test set (50 cases). We report a classification accuracy of 94% in the training set cross-validation and 92% on the test set. Our results underpin the potential of machine learning methods for accurate, fast and reproducible segmentation and computer-assisted diagnosis (CAD).
1503.07376
Rafael Cisneros
R. Cisneros, M. Pirro, G. Bergna, R. Ortega, G. Ippoliti, M. Molinas
Global Tracking Passivity-based PI Control of Bilinear Systems and its Application to the Boost and Modular Multilevel Converters
9 pages, 10 figures
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with the problem of trajectory tracking for a class of bilinear systems with a time-varying, measurable disturbance. A set of matrices {A,B_i} has been identified, via a linear matrix inequality, for which it is possible to ensure global tracking of (admissible, differentiable) trajectories with a simple linear time-varying PI controller. Instrumental in establishing the result is the construction of an output signal with respect to which the incremental model is passive. The result is applied to the boost and modular multilevel converters, for which experimental results are given.
[ { "created": "Wed, 25 Mar 2015 13:53:00 GMT", "version": "v1" } ]
2015-03-26
[ [ "Cisneros", "R.", "" ], [ "Pirro", "M.", "" ], [ "Bergna", "G.", "" ], [ "Ortega", "R.", "" ], [ "Ippoliti", "G.", "" ], [ "Molinas", "M.", "" ] ]
This paper deals with the problem of trajectory tracking for a class of bilinear systems with a time-varying, measurable disturbance. A set of matrices {A,B_i} has been identified, via a linear matrix inequality, for which it is possible to ensure global tracking of (admissible, differentiable) trajectories with a simple linear time-varying PI controller. Instrumental in establishing the result is the construction of an output signal with respect to which the incremental model is passive. The result is applied to the boost and modular multilevel converters, for which experimental results are given.
2305.12542
Zachary Yang
Zachary Yang, Yasmine Maricar, MohammadReza Davari, Nicolas Grenon-Godbout, Reihaneh Rabbany
ToxBuster: In-game Chat Toxicity Buster with BERT
11 pages, 3 figures
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
Detecting toxicity in online spaces is challenging and an ever more pressing problem given the increase in social media and gaming consumption. We introduce ToxBuster, a simple and scalable model trained on a relatively large dataset of 194k lines of game chat from Rainbow Six Siege and For Honor, carefully annotated for different kinds of toxicity. Compared to the existing state-of-the-art, ToxBuster achieves 82.95% (+7) in precision and 83.56% (+57) in recall. This improvement is obtained by leveraging past chat history and metadata. We also study the implications for real-time and post-game moderation, as well as the model's transferability from one game to another.
[ { "created": "Sun, 21 May 2023 18:53:26 GMT", "version": "v1" } ]
2023-05-24
[ [ "Yang", "Zachary", "" ], [ "Maricar", "Yasmine", "" ], [ "Davari", "MohammadReza", "" ], [ "Grenon-Godbout", "Nicolas", "" ], [ "Rabbany", "Reihaneh", "" ] ]
Detecting toxicity in online spaces is challenging and an ever more pressing problem given the increase in social media and gaming consumption. We introduce ToxBuster, a simple and scalable model trained on a relatively large dataset of 194k lines of game chat from Rainbow Six Siege and For Honor, carefully annotated for different kinds of toxicity. Compared to the existing state-of-the-art, ToxBuster achieves 82.95% (+7) in precision and 83.56% (+57) in recall. This improvement is obtained by leveraging past chat history and metadata. We also study the implications for real-time and post-game moderation, as well as the model's transferability from one game to another.
2007.10871
Christian Hesch
Maik Dittman, Jonathan Schult, Felix Schmidt and Christian Hesch
A strain-gradient formulation for fiber reinforced polymers: Hybrid phase-field model for porous-ductile fracture
null
null
10.1007/s00466-021-02018-0
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel numerical approach to analyzing the mechanical behavior of composite materials, including the inelastic regime up to final failure, is presented. To this end, a second-gradient theory is combined with phase-field methods for fracture. In particular, we assume that the polymeric matrix material undergoes ductile fracture, whereas continuously embedded fibers undergo brittle fracture, as is typical, e.g., for roving-glass reinforced thermoplastics. A hybrid phase-field approach is developed and applied along with a modified Gurson-Tvergaard-Needleman (GTN)-type plasticity model accounting for a temperature-dependent growth of voids on the microscale. The mechanical response of the arising microstructure of the woven fabric gives rise to additional higher-order terms, representing homogenized bending contributions of the fibers. Eventually, a series of tests is conducted for this physically comprehensive multifield formulation to investigate different kinds and sequences of failure within long-fiber reinforced polymers.
[ { "created": "Tue, 21 Jul 2020 14:55:50 GMT", "version": "v1" }, { "created": "Mon, 19 Apr 2021 11:11:24 GMT", "version": "v2" } ]
2021-04-20
[ [ "Dittman", "Maik", "" ], [ "Schult", "Jonathan", "" ], [ "Schmidt", "Felix", "" ], [ "Hesch", "Christian", "" ] ]
A novel numerical approach to analyzing the mechanical behavior of composite materials, including the inelastic regime up to final failure, is presented. To this end, a second-gradient theory is combined with phase-field methods for fracture. In particular, we assume that the polymeric matrix material undergoes ductile fracture, whereas continuously embedded fibers undergo brittle fracture, as is typical, e.g., for roving-glass reinforced thermoplastics. A hybrid phase-field approach is developed and applied along with a modified Gurson-Tvergaard-Needleman (GTN)-type plasticity model accounting for a temperature-dependent growth of voids on the microscale. The mechanical response of the arising microstructure of the woven fabric gives rise to additional higher-order terms, representing homogenized bending contributions of the fibers. Eventually, a series of tests is conducted for this physically comprehensive multifield formulation to investigate different kinds and sequences of failure within long-fiber reinforced polymers.
2406.16501
Alvaro Lopez Pellicer
Alvaro Lopez Pellicer, Kittipos Giatgong, Yi Li, Neeraj Suri, Plamen Angelov
UNICAD: A Unified Approach for Attack Detection, Noise Reduction and Novel Class Identification
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
As the use of Deep Neural Networks (DNNs) becomes pervasive, their vulnerability to adversarial attacks and their limitations in handling unseen classes pose significant challenges. The state-of-the-art offers discrete solutions aimed at tackling individual issues covering specific adversarial attack scenarios, classification, or evolving learning. However, real-world systems need to be able to detect and recover from a wide range of adversarial attacks without sacrificing classification accuracy, and to act flexibly in {\bf unseen} scenarios. In this paper, UNICAD is proposed as a novel framework that integrates a variety of techniques to provide an adaptive solution. For targeted image classification, UNICAD achieves accurate image classification, detects unseen classes, and recovers from adversarial attacks using Prototype- and Similarity-based DNNs with denoising autoencoders. Our experiments on the CIFAR-10 dataset highlight UNICAD's effectiveness in adversarial mitigation and unseen-class classification, outperforming traditional models.
[ { "created": "Mon, 24 Jun 2024 10:10:03 GMT", "version": "v1" } ]
2024-06-25
[ [ "Pellicer", "Alvaro Lopez", "" ], [ "Giatgong", "Kittipos", "" ], [ "Li", "Yi", "" ], [ "Suri", "Neeraj", "" ], [ "Angelov", "Plamen", "" ] ]
As the use of Deep Neural Networks (DNNs) becomes pervasive, their vulnerability to adversarial attacks and limitations in handling unseen classes poses significant challenges. The state-of-the-art offers discrete solutions aimed to tackle individual issues covering specific adversarial attack scenarios, classification or evolving learning. However, real-world systems need to be able to detect and recover from a wide range of adversarial attacks without sacrificing classification accuracy and to flexibly act in {\bf unseen} scenarios. In this paper, UNICAD, is proposed as a novel framework that integrates a variety of techniques to provide an adaptive solution. For the targeted image classification, UNICAD achieves accurate image classification, detects unseen classes, and recovers from adversarial attacks using Prototype and Similarity-based DNNs with denoising autoencoders. Our experiments performed on the CIFAR-10 dataset highlight UNICAD's effectiveness in adversarial mitigation and unseen class classification, outperforming traditional models.
2401.14255
Yumnah Hasan
Yumnah Hasan, Allan de Lima, Fatemeh Amerehi, Darian Reyes Fernandez de Bulnes, Patrick Healy, and Conor Ryan
Interpretable Solutions for Breast Cancer Diagnosis with Grammatical Evolution and Data Augmentation
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical imaging diagnosis increasingly relies on Machine Learning (ML) models. This is a task that is often hampered by severely imbalanced datasets, where positive cases can be quite rare. Their use is further compromised by their limited interpretability, which is becoming increasingly important. While post-hoc interpretability techniques such as SHAP and LIME have been used with some success on so-called black box models, the use of inherently understandable models makes such endeavors more fruitful. This paper addresses these issues by demonstrating how a relatively new synthetic data generation technique, STEM, can be used to produce data to train models produced by Grammatical Evolution (GE) that are inherently understandable. STEM is a recently introduced combination of the Synthetic Minority Oversampling Technique (SMOTE), Edited Nearest Neighbour (ENN), and Mixup; it has previously been successfully used to tackle both between-class and within-class imbalance issues. We test our technique on the Digital Database for Screening Mammography (DDSM) and the Wisconsin Breast Cancer (WBC) datasets and compare Area Under the Curve (AUC) results with an ensemble of the top three performing classifiers from a set of eight standard ML classifiers with varying degrees of interpretability. We demonstrate that the GE-derived models present the best AUC while still maintaining interpretable solutions.
[ { "created": "Thu, 25 Jan 2024 15:45:28 GMT", "version": "v1" } ]
2024-01-26
[ [ "Hasan", "Yumnah", "" ], [ "de Lima", "Allan", "" ], [ "Amerehi", "Fatemeh", "" ], [ "de Bulnes", "Darian Reyes Fernandez", "" ], [ "Healy", "Patrick", "" ], [ "Ryan", "Conor", "" ] ]
Medical imaging diagnosis increasingly relies on Machine Learning (ML) models. This is a task that is often hampered by severely imbalanced datasets, where positive cases can be quite rare. Their use is further compromised by their limited interpretability, which is becoming increasingly important. While post-hoc interpretability techniques such as SHAP and LIME have been used with some success on so-called black box models, the use of inherently understandable models makes such endeavors more fruitful. This paper addresses these issues by demonstrating how a relatively new synthetic data generation technique, STEM, can be used to produce data to train models produced by Grammatical Evolution (GE) that are inherently understandable. STEM is a recently introduced combination of the Synthetic Minority Oversampling Technique (SMOTE), Edited Nearest Neighbour (ENN), and Mixup; it has previously been successfully used to tackle both between-class and within-class imbalance issues. We test our technique on the Digital Database for Screening Mammography (DDSM) and the Wisconsin Breast Cancer (WBC) datasets and compare Area Under the Curve (AUC) results with an ensemble of the top three performing classifiers from a set of eight standard ML classifiers with varying degrees of interpretability. We demonstrate that the GE-derived models present the best AUC while still maintaining interpretable solutions.
2402.19231
Feng Lu
Feng Lu, Xiangyuan Lan, Lijun Zhang, Dongmei Jiang, Yaowei Wang, Chun Yuan
CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition
Accepted by CVPR2024
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past decade, most methods in visual place recognition (VPR) have used neural networks to produce feature representations. These networks typically produce a global representation of a place image using only this image itself and neglect the cross-image variations (e.g. viewpoint and illumination), which limits their robustness in challenging scenes. In this paper, we propose a robust global representation method with cross-image correlation awareness for VPR, named CricaVPR. Our method uses the attention mechanism to correlate multiple images within a batch. These images can be taken in the same place with different conditions or viewpoints, or even captured from different places. Therefore, our method can utilize the cross-image variations as a cue to guide the representation learning, which ensures more robust features are produced. To further facilitate the robustness, we propose a multi-scale convolution-enhanced adaptation method to adapt pre-trained visual foundation models to the VPR task, which introduces the multi-scale local information to further enhance the cross-image correlation-aware representation. Experimental results show that our method outperforms state-of-the-art methods by a large margin with significantly less training time. The code is released at https://github.com/Lu-Feng/CricaVPR.
[ { "created": "Thu, 29 Feb 2024 15:05:11 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 13:16:01 GMT", "version": "v2" } ]
2024-04-02
[ [ "Lu", "Feng", "" ], [ "Lan", "Xiangyuan", "" ], [ "Zhang", "Lijun", "" ], [ "Jiang", "Dongmei", "" ], [ "Wang", "Yaowei", "" ], [ "Yuan", "Chun", "" ] ]
Over the past decade, most methods in visual place recognition (VPR) have used neural networks to produce feature representations. These networks typically produce a global representation of a place image using only this image itself and neglect the cross-image variations (e.g. viewpoint and illumination), which limits their robustness in challenging scenes. In this paper, we propose a robust global representation method with cross-image correlation awareness for VPR, named CricaVPR. Our method uses the attention mechanism to correlate multiple images within a batch. These images can be taken in the same place with different conditions or viewpoints, or even captured from different places. Therefore, our method can utilize the cross-image variations as a cue to guide the representation learning, which ensures more robust features are produced. To further facilitate the robustness, we propose a multi-scale convolution-enhanced adaptation method to adapt pre-trained visual foundation models to the VPR task, which introduces the multi-scale local information to further enhance the cross-image correlation-aware representation. Experimental results show that our method outperforms state-of-the-art methods by a large margin with significantly less training time. The code is released at https://github.com/Lu-Feng/CricaVPR.
1310.3252
Robert Krauthgamer
Alexandr Andoni, Anupam Gupta, Robert Krauthgamer
Towards (1+\epsilon)-Approximate Flow Sparsifiers
Full version of a paper accepted to SODA 2014
null
null
null
cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A useful approach to "compress" a large network $G$ is to represent it with a {\em flow-sparsifier}, i.e., a small network $H$ that supports the same flows as $G$, up to a factor $q \geq 1$ called the quality of the sparsifier. Specifically, we assume the network $G$ contains a set of $k$ terminals $T$, shared with the network $H$, i.e., $T\subseteq V(G)\cap V(H)$, and we want $H$ to preserve all multicommodity flows that can be routed between the terminals $T$. The challenge is to construct $H$ that is small. These questions have received a lot of attention in recent years, leading to some known tradeoffs between the sparsifier's quality $q$ and its size $|V(H)|$. Nevertheless, it remains an outstanding question whether every $G$ admits a flow-sparsifier $H$ with quality $q=1+\epsilon$, or even $q=O(1)$, and size $|V(H)|\leq f(k,\epsilon)$ (in particular, independent of $|V(G)|$ and the edge capacities). Making a first step in this direction, we present new constructions for several scenarios: * Our main result is that for quasi-bipartite networks $G$, one can construct a $(1+\epsilon)$-flow-sparsifier of size $\mathrm{poly}(k/\epsilon)$. In contrast, exact ($q=1$) sparsifiers for this family of networks are known to require size $2^{\Omega(k)}$. * For networks $G$ of bounded treewidth $w$, we construct a flow-sparsifier with quality $q=O(\log w / \log\log w)$ and size $O(w\cdot \mathrm{poly}(k))$. * For general networks $G$, we construct a {\em sketch} $sk(G)$, that stores all the feasible multicommodity flows up to factor $q=1+\epsilon$, and its size (storage requirement) is $f(k,\epsilon)$.
[ { "created": "Fri, 11 Oct 2013 19:23:21 GMT", "version": "v1" } ]
2013-10-14
[ [ "Andoni", "Alexandr", "" ], [ "Gupta", "Anupam", "" ], [ "Krauthgamer", "Robert", "" ] ]
A useful approach to "compress" a large network $G$ is to represent it with a {\em flow-sparsifier}, i.e., a small network $H$ that supports the same flows as $G$, up to a factor $q \geq 1$ called the quality of the sparsifier. Specifically, we assume the network $G$ contains a set of $k$ terminals $T$, shared with the network $H$, i.e., $T\subseteq V(G)\cap V(H)$, and we want $H$ to preserve all multicommodity flows that can be routed between the terminals $T$. The challenge is to construct $H$ that is small. These questions have received a lot of attention in recent years, leading to some known tradeoffs between the sparsifier's quality $q$ and its size $|V(H)|$. Nevertheless, it remains an outstanding question whether every $G$ admits a flow-sparsifier $H$ with quality $q=1+\epsilon$, or even $q=O(1)$, and size $|V(H)|\leq f(k,\epsilon)$ (in particular, independent of $|V(G)|$ and the edge capacities). Making a first step in this direction, we present new constructions for several scenarios: * Our main result is that for quasi-bipartite networks $G$, one can construct a $(1+\epsilon)$-flow-sparsifier of size $\mathrm{poly}(k/\epsilon)$. In contrast, exact ($q=1$) sparsifiers for this family of networks are known to require size $2^{\Omega(k)}$. * For networks $G$ of bounded treewidth $w$, we construct a flow-sparsifier with quality $q=O(\log w / \log\log w)$ and size $O(w\cdot \mathrm{poly}(k))$. * For general networks $G$, we construct a {\em sketch} $sk(G)$, that stores all the feasible multicommodity flows up to factor $q=1+\epsilon$, and its size (storage requirement) is $f(k,\epsilon)$.
2108.01358
Jakob Karalus
Jakob Karalus, Felix Lindner
Accelerating the Learning of TAMER with Counterfactual Explanations
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The capability to interactively learn from human feedback would enable agents in new settings. For example, even novice users could train service robots in new tasks naturally and interactively. Human-in-the-loop Reinforcement Learning (HRL) combines human feedback and Reinforcement Learning (RL) techniques. State-of-the-art interactive learning techniques suffer from slow learning speed, thus leading to a frustrating experience for the human. We approach this problem by extending the HRL framework TAMER for evaluative feedback with the possibility to enhance human feedback with two different types of counterfactual explanations (action and state based). We experimentally show that our extensions improve the speed of learning.
[ { "created": "Tue, 3 Aug 2021 08:27:28 GMT", "version": "v1" }, { "created": "Wed, 27 Jul 2022 07:59:22 GMT", "version": "v2" } ]
2022-07-28
[ [ "Karalus", "Jakob", "" ], [ "Lindner", "Felix", "" ] ]
The capability to interactively learn from human feedback would enable agents in new settings. For example, even novice users could train service robots in new tasks naturally and interactively. Human-in-the-loop Reinforcement Learning (HRL) combines human feedback and Reinforcement Learning (RL) techniques. State-of-the-art interactive learning techniques suffer from slow learning speed, thus leading to a frustrating experience for the human. We approach this problem by extending the HRL framework TAMER for evaluative feedback with the possibility to enhance human feedback with two different types of counterfactual explanations (action and state based). We experimentally show that our extensions improve the speed of learning.
1803.09179
Matthias Nie{\ss}ner
Andreas R\"ossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nie{\ss}ner
FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces
Video: https://youtu.be/Tle7YaPkO_k
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.
[ { "created": "Sat, 24 Mar 2018 23:12:44 GMT", "version": "v1" } ]
2018-03-28
[ [ "Rössler", "Andreas", "" ], [ "Cozzolino", "Davide", "" ], [ "Verdoliva", "Luisa", "" ], [ "Riess", "Christian", "" ], [ "Thies", "Justus", "" ], [ "Nießner", "Matthias", "" ] ]
With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.
2405.01392
David Maranto
David Maranto
LLMSat: A Large Language Model-Based Goal-Oriented Agent for Autonomous Space Exploration
B.A.Sc thesis
null
null
null
cs.RO cs.AI cs.LG cs.MA physics.space-ph
http://creativecommons.org/licenses/by/4.0/
As spacecraft journey further from Earth with more complex missions, systems of greater autonomy and onboard intelligence are called for. Reducing reliance on human-based mission control becomes increasingly critical if we are to increase our rate of solar-system-wide exploration. Recent work has explored AI-based goal-oriented systems to increase the level of autonomy in mission execution. These systems make use of symbolic reasoning managers to make inferences from the state of a spacecraft and a handcrafted knowledge base, enabling autonomous generation of tasks and re-planning. Such systems have proven to be successful in controlled cases, but they are difficult to implement as they require human-crafted ontological models to allow the spacecraft to understand the world. Reinforcement learning has been applied to train robotic agents to pursue a goal. A new architecture for autonomy is called for. This work explores the application of Large Language Models (LLMs) as the high-level control system of a spacecraft. Using a systems engineering approach, this work presents the design and development of an agentic spacecraft controller by leveraging an LLM as a reasoning engine, to evaluate the utility of such an architecture in achieving higher levels of spacecraft autonomy. A series of deep space mission scenarios simulated within the popular game engine Kerbal Space Program (KSP) are used as case studies to evaluate the implementation against the requirements. It is shown that the reasoning and planning abilities of present-day LLMs do not scale well as the complexity of a mission increases, but this can be alleviated with adequate prompting frameworks and strategic selection of the agent's level of authority over the host spacecraft. This research evaluates the potential of LLMs in augmenting autonomous decision-making systems for future robotic space applications.
[ { "created": "Sat, 13 Apr 2024 03:33:17 GMT", "version": "v1" } ]
2024-05-03
[ [ "Maranto", "David", "" ] ]
As spacecraft journey further from Earth with more complex missions, systems of greater autonomy and onboard intelligence are called for. Reducing reliance on human-based mission control becomes increasingly critical if we are to increase our rate of solar-system-wide exploration. Recent work has explored AI-based goal-oriented systems to increase the level of autonomy in mission execution. These systems make use of symbolic reasoning managers to make inferences from the state of a spacecraft and a handcrafted knowledge base, enabling autonomous generation of tasks and re-planning. Such systems have proven to be successful in controlled cases, but they are difficult to implement as they require human-crafted ontological models to allow the spacecraft to understand the world. Reinforcement learning has been applied to train robotic agents to pursue a goal. A new architecture for autonomy is called for. This work explores the application of Large Language Models (LLMs) as the high-level control system of a spacecraft. Using a systems engineering approach, this work presents the design and development of an agentic spacecraft controller by leveraging an LLM as a reasoning engine, to evaluate the utility of such an architecture in achieving higher levels of spacecraft autonomy. A series of deep space mission scenarios simulated within the popular game engine Kerbal Space Program (KSP) are used as case studies to evaluate the implementation against the requirements. It is shown that the reasoning and planning abilities of present-day LLMs do not scale well as the complexity of a mission increases, but this can be alleviated with adequate prompting frameworks and strategic selection of the agent's level of authority over the host spacecraft. This research evaluates the potential of LLMs in augmenting autonomous decision-making systems for future robotic space applications.
1410.6277
Sven Banisch
Sven Banisch
The Probabilistic Structure of Discrete Agent-Based Models
null
Discontinuity, Nonlinearity, and Complexity, 3:281--292, 2014
null
null
cs.MA cs.CY nlin.AO physics.comp-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a formalization of agent-based models (ABMs) as random walks on regular graphs and relates the symmetry group of those graphs to a coarse-graining of the ABM that is still Markovian. An ABM in which $N$ agents can be in $\delta$ different states leads to a Markov chain with $\delta^N$ states. In ABMs with a sequential update scheme by which one agent is chosen to update its state at a time, transitions are only allowed between system configurations that differ with respect to a single agent. This characterizes ABMs as random walks on regular graphs. The non-trivial automorphisms of those graphs make visible the dynamical symmetries that an ABM gives rise to because sets of micro configurations can be interchanged without changing the probability structure of the random walk. This allows for a systematic loss-less reduction of the state space of the model.
[ { "created": "Thu, 23 Oct 2014 08:01:35 GMT", "version": "v1" } ]
2014-10-24
[ [ "Banisch", "Sven", "" ] ]
This paper describes a formalization of agent-based models (ABMs) as random walks on regular graphs and relates the symmetry group of those graphs to a coarse-graining of the ABM that is still Markovian. An ABM in which $N$ agents can be in $\delta$ different states leads to a Markov chain with $\delta^N$ states. In ABMs with a sequential update scheme by which one agent is chosen to update its state at a time, transitions are only allowed between system configurations that differ with respect to a single agent. This characterizes ABMs as random walks on regular graphs. The non-trivial automorphisms of those graphs make visible the dynamical symmetries that an ABM gives rise to because sets of micro configurations can be interchanged without changing the probability structure of the random walk. This allows for a systematic loss-less reduction of the state space of the model.
2204.13515
Abdellah El Mekki
Abdellah El Mekki and Abdelkader El Mahdaouy and Mohammed Akallouch and Ismail Berrada and Ahmed Khoumsi
UM6P-CS at SemEval-2022 Task 11: Enhancing Multilingual and Code-Mixed Complex Named Entity Recognition via Pseudo Labels using Multilingual Transformer
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Building real-world complex Named Entity Recognition (NER) systems is a challenging task. This is due to the complexity and ambiguity of named entities that appear in various contexts such as short input sentences, emerging entities, and complex entities. Besides, real-world queries are mostly malformed, as they can be code-mixed or multilingual, among other scenarios. In this paper, we introduce our submitted system to the Multilingual Complex Named Entity Recognition (MultiCoNER) shared task. We approach the complex NER for multilingual and code-mixed queries, by relying on the contextualized representation provided by the multilingual Transformer XLM-RoBERTa. In addition to the CRF-based token classification layer, we incorporate a span classification loss to recognize named entity spans. Furthermore, we use a self-training mechanism to generate weakly-annotated data from a large unlabeled dataset. Our proposed system is ranked 6th and 8th in the multilingual and code-mixed MultiCoNER's tracks respectively.
[ { "created": "Thu, 28 Apr 2022 14:07:06 GMT", "version": "v1" } ]
2022-04-29
[ [ "Mekki", "Abdellah El", "" ], [ "Mahdaouy", "Abdelkader El", "" ], [ "Akallouch", "Mohammed", "" ], [ "Berrada", "Ismail", "" ], [ "Khoumsi", "Ahmed", "" ] ]
Building real-world complex Named Entity Recognition (NER) systems is a challenging task. This is due to the complexity and ambiguity of named entities that appear in various contexts such as short input sentences, emerging entities, and complex entities. Besides, real-world queries are mostly malformed, as they can be code-mixed or multilingual, among other scenarios. In this paper, we introduce our submitted system to the Multilingual Complex Named Entity Recognition (MultiCoNER) shared task. We approach the complex NER for multilingual and code-mixed queries, by relying on the contextualized representation provided by the multilingual Transformer XLM-RoBERTa. In addition to the CRF-based token classification layer, we incorporate a span classification loss to recognize named entity spans. Furthermore, we use a self-training mechanism to generate weakly-annotated data from a large unlabeled dataset. Our proposed system is ranked 6th and 8th in the multilingual and code-mixed MultiCoNER's tracks respectively.
1612.05626
Biplav Srivastava
Biplav Srivastava, Sandeep Sandha, Vaskar Raychoudhury, Sukanya Randhawa, Viral Kapoor, Anmol Agrawal
An Open, Multi-Sensor, Dataset of Water Pollution of Ganga Basin and its Application to Understand Impact of Large Religious Gathering
7 pages
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Water is a crucial pre-requisite for all human activities. Due to growing demand from population and shrinking supply of potable water, there is an urgent need to use computational methods to manage available water intelligently, and especially in developing countries like India where even basic data to track water availability or physical infrastructure to process water are inadequate. In this context, we present a dataset of water pollution containing quantitative and qualitative data from a combination of modalities - real-time sensors, lab results, and estimates from people using mobile apps. The data on our API-accessible cloud platform covers more than 60 locations and consists of both what we have ourselves collected from multiple locations following a novel process, and from others (lab-results) which were open but hitherto difficult to access. Further, we discuss an application of released data to understand spatio-temporal pollution impact of a large event with hundreds of millions of people converging on a river during a religious gathering (Ardh Khumbh 2016) spread over months. Such unprecedented details can help authorities manage an ongoing event or plan for future ones. The community can use the data for any application and also contribute new data to the platform.
[ { "created": "Sun, 20 Nov 2016 01:45:36 GMT", "version": "v1" } ]
2016-12-19
[ [ "Srivastava", "Biplav", "" ], [ "Sandha", "Sandeep", "" ], [ "Raychoudhury", "Vaskar", "" ], [ "Randhawa", "Sukanya", "" ], [ "Kapoor", "Viral", "" ], [ "Agrawal", "Anmol", "" ] ]
Water is a crucial pre-requisite for all human activities. Due to growing demand from population and shrinking supply of potable water, there is an urgent need to use computational methods to manage available water intelligently, and especially in developing countries like India where even basic data to track water availability or physical infrastructure to process water are inadequate. In this context, we present a dataset of water pollution containing quantitative and qualitative data from a combination of modalities - real-time sensors, lab results, and estimates from people using mobile apps. The data on our API-accessible cloud platform covers more than 60 locations and consists of both what we have ourselves collected from multiple locations following a novel process, and from others (lab-results) which were open but hitherto difficult to access. Further, we discuss an application of released data to understand spatio-temporal pollution impact of a large event with hundreds of millions of people converging on a river during a religious gathering (Ardh Khumbh 2016) spread over months. Such unprecedented details can help authorities manage an ongoing event or plan for future ones. The community can use the data for any application and also contribute new data to the platform.
1506.01501
Nematollah Zarmehi
Nematollah Zarmehi, Morteza Banagar, Mohammad Ali Akhaee
Optimum Decoder for an Additive Video Watermarking with Laplacian Noise in H.264
null
2013 10th International ISC Conference on Information Security and Cryptology (ISCISC),Aug. 2013, pp. 1-5
10.1109/ISCISC.2013.6767352
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate an additive video watermarking method in the H.264 standard in the presence of Laplacian noise. In some applications, due to the loss of some pixels or a region of a frame, we resort to Laplacian noise rather than a Gaussian one. The embedding is performed in the transform domain, while an optimum and a sub-optimum decoder are derived for the proposed Laplacian model. Simulation results show that the proposed watermarking scheme has suitable performance with enough transparency required for watermarking applications.
[ { "created": "Thu, 4 Jun 2015 08:10:13 GMT", "version": "v1" } ]
2015-06-05
[ [ "Zarmehi", "Nematollah", "" ], [ "Banagar", "Morteza", "" ], [ "Akhaee", "Mohammad Ali", "" ] ]
In this paper, we investigate an additive video watermarking method in the H.264 standard in the presence of Laplacian noise. In some applications, due to the loss of some pixels or a region of a frame, we resort to Laplacian noise rather than a Gaussian one. The embedding is performed in the transform domain, while an optimum and a sub-optimum decoder are derived for the proposed Laplacian model. Simulation results show that the proposed watermarking scheme has suitable performance with enough transparency required for watermarking applications.
1507.02674
David Harris
David G. Harris, Aravind Srinivasan
Algorithmic and enumerative aspects of the Moser-Tardos distribution
null
ACM Transactions on Algorithms 13(3), Article #33 (2017)
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovasz Local Lemma (LLL); the basic operation done in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that in certain conditions when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show in some cases this MT variant can run faster than the original MT algorithm itself, and develop the first-known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than known before by proving that the MT-distribution has "large" min-entropy and hence that its support-size is large.
[ { "created": "Thu, 9 Jul 2015 19:54:36 GMT", "version": "v1" }, { "created": "Mon, 12 Sep 2016 17:54:20 GMT", "version": "v2" }, { "created": "Thu, 8 Dec 2016 14:11:34 GMT", "version": "v3" }, { "created": "Thu, 16 Feb 2017 16:02:07 GMT", "version": "v4" } ]
2023-10-13
[ [ "Harris", "David G.", "" ], [ "Srinivasan", "Aravind", "" ] ]
Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovasz Local Lemma (LLL); the basic operation done in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that in certain conditions when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show in some cases this MT variant can run faster than the original MT algorithm itself, and develop the first-known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than known before by proving that the MT-distribution has "large" min-entropy and hence that its support-size is large.
1705.05344
Gilwoo Lee
Gilwoo Lee, Siddhartha S. Srinivasa, Matthew T. Mason
GP-ILQG: Data-driven Robust Optimal Control for Uncertain Nonlinear Dynamical Systems
null
null
null
null
cs.RO cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As we aim to control complex systems, use of a simulator in model-based reinforcement learning is becoming more common. However, it has been challenging to overcome the Reality Gap, which comes from nonlinear model bias and susceptibility to disturbance. To address these problems, we propose a novel algorithm that combines a data-driven system identification approach (Gaussian Process) with a Differential-Dynamic-Programming-based robust optimal control method (Iterative Linear Quadratic Control). Our algorithm uses the simulator's model as the mean function for a Gaussian Process and learns only the difference between the simulator's prediction and actual observations, making it a natural hybrid of simulation and real-world observation. We show that our approach quickly corrects incorrect models, comes up with robust optimal controllers, and transfers its acquired model knowledge to new tasks efficiently.
[ { "created": "Mon, 15 May 2017 17:29:25 GMT", "version": "v1" } ]
2017-05-16
[ [ "Lee", "Gilwoo", "" ], [ "Srinivasa", "Siddhartha S.", "" ], [ "Mason", "Matthew T.", "" ] ]
As we aim to control complex systems, use of a simulator in model-based reinforcement learning is becoming more common. However, it has been challenging to overcome the Reality Gap, which comes from nonlinear model bias and susceptibility to disturbance. To address these problems, we propose a novel algorithm that combines a data-driven system identification approach (Gaussian Process) with a Differential-Dynamic-Programming-based robust optimal control method (Iterative Linear Quadratic Control). Our algorithm uses the simulator's model as the mean function for a Gaussian Process and learns only the difference between the simulator's prediction and actual observations, making it a natural hybrid of simulation and real-world observation. We show that our approach quickly corrects incorrect models, comes up with robust optimal controllers, and transfers its acquired model knowledge to new tasks efficiently.
1605.09653
Srikrishna Karanam
Srikrishna Karanam, Mengran Gou, Ziyan Wu, Angels Rates-Borras, Octavia Camps, Richard J. Radke
A Systematic Evaluation and Benchmark for Person Re-Identification: Features, Metrics, and Datasets
Preliminary work on person Re-Id benchmark. S. Karanam and M. Gou contributed equally. 14 pages, 6 figures, 4 tables. For supplementary material, see http://robustsystems.coe.neu.edu/sites/robustsystems.coe.neu.edu/files/systems/supmat/ReID_benchmark_supp.zip
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person re-identification (re-id) is a critical problem in video analytics applications such as security and surveillance. The public release of several datasets and code for vision algorithms has facilitated rapid progress in this area over the last few years. However, directly comparing re-id algorithms reported in the literature has become difficult since a wide variety of features, experimental protocols, and evaluation metrics are employed. In order to address this need, we present an extensive review and performance evaluation of single- and multi-shot re-id algorithms. The experimental protocol incorporates the most recent advances in both feature extraction and metric learning. To ensure a fair comparison, all of the approaches were implemented using a unified code library that includes 11 feature extraction algorithms and 22 metric learning and ranking techniques. All approaches were evaluated using a new large-scale dataset that closely mimics a real-world problem setting, in addition to 16 other publicly available datasets: VIPeR, GRID, CAVIAR, DukeMTMC4ReID, 3DPeS, PRID, V47, WARD, SAIVT-SoftBio, CUHK01, CUHK02, CUHK03, RAiD, iLIDS-VID, HDA+ and Market1501. The evaluation codebase and results will be made publicly available for community use.
[ { "created": "Tue, 31 May 2016 15:01:46 GMT", "version": "v1" }, { "created": "Wed, 1 Jun 2016 05:55:46 GMT", "version": "v2" }, { "created": "Tue, 29 Nov 2016 19:50:51 GMT", "version": "v3" }, { "created": "Fri, 18 Aug 2017 03:39:58 GMT", "version": "v4" }, { "created": "Wed, 14 Feb 2018 16:27:31 GMT", "version": "v5" } ]
2018-02-15
[ [ "Karanam", "Srikrishna", "" ], [ "Gou", "Mengran", "" ], [ "Wu", "Ziyan", "" ], [ "Rates-Borras", "Angels", "" ], [ "Camps", "Octavia", "" ], [ "Radke", "Richard J.", "" ] ]
Person re-identification (re-id) is a critical problem in video analytics applications such as security and surveillance. The public release of several datasets and code for vision algorithms has facilitated rapid progress in this area over the last few years. However, directly comparing re-id algorithms reported in the literature has become difficult since a wide variety of features, experimental protocols, and evaluation metrics are employed. In order to address this need, we present an extensive review and performance evaluation of single- and multi-shot re-id algorithms. The experimental protocol incorporates the most recent advances in both feature extraction and metric learning. To ensure a fair comparison, all of the approaches were implemented using a unified code library that includes 11 feature extraction algorithms and 22 metric learning and ranking techniques. All approaches were evaluated using a new large-scale dataset that closely mimics a real-world problem setting, in addition to 16 other publicly available datasets: VIPeR, GRID, CAVIAR, DukeMTMC4ReID, 3DPeS, PRID, V47, WARD, SAIVT-SoftBio, CUHK01, CUHK02, CUHK03, RAiD, iLIDS-VID, HDA+ and Market1501. The evaluation codebase and results will be made publicly available for community use.
1903.05879
Christian Graulund
Patrick Bahr, Christian Graulund, Rasmus M{\o}gelberg
Simply RaTT: A Fitch-style Modal Calculus for Reactive Programming without Space Leaks
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional reactive programming (FRP) is a paradigm for programming with signals and events, allowing the user to describe reactive programs on a high level of abstraction. For this to make sense, an FRP language must ensure that all programs are causal, and can be implemented without introducing space leaks and time leaks. To this end, some FRP languages do not give direct access to signals, but just to signal functions. Recently, modal types have been suggested as an alternative approach to ensuring causality in FRP languages in the synchronous case, giving direct access to the signal and event abstractions. This paper presents Simply RaTT, a new modal calculus for reactive programming. Unlike prior calculi, Simply RaTT uses a Fitch-style approach to modal types, which simplifies the type system and makes programs more concise. Echoing a previous result by Krishnaswami for a different language, we devise an operational semantics that safely executes Simply RaTT programs without space leaks. We also identify a source of time leaks present in other modal FRP languages: The unfolding of fixed points in delayed computations. These time leaks are eliminated by the Simply RaTT type system.
[ { "created": "Thu, 14 Mar 2019 09:50:15 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2019 13:23:14 GMT", "version": "v2" } ]
2019-06-12
[ [ "Bahr", "Patrick", "" ], [ "Graulund", "Christian", "" ], [ "Møgelberg", "Rasmus", "" ] ]
Functional reactive programming (FRP) is a paradigm for programming with signals and events, allowing the user to describe reactive programs on a high level of abstraction. For this to make sense, an FRP language must ensure that all programs are causal, and can be implemented without introducing space leaks and time leaks. To this end, some FRP languages do not give direct access to signals, but just to signal functions. Recently, modal types have been suggested as an alternative approach to ensuring causality in FRP languages in the synchronous case, giving direct access to the signal and event abstractions. This paper presents Simply RaTT, a new modal calculus for reactive programming. Unlike prior calculi, Simply RaTT uses a Fitch-style approach to modal types, which simplifies the type system and makes programs more concise. Echoing a previous result by Krishnaswami for a different language, we devise an operational semantics that safely executes Simply RaTT programs without space leaks. We also identify a source of time leaks present in other modal FRP languages: The unfolding of fixed points in delayed computations. These time leaks are eliminated by the Simply RaTT type system.
2306.15880
Jianzong Wu
Jianzong Wu, Xiangtai Li, Shilin Xu, Haobo Yuan, Henghui Ding, Yibo Yang, Xia Li, Jiangning Zhang, Yunhai Tong, Xudong Jiang, Bernard Ghanem, Dacheng Tao
Towards Open Vocabulary Learning: A Survey
Accepted by IEEE T-PAMI. Project page: https://github.com/jianzongwu/Awesome-Open-Vocabulary
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the field of visual scene understanding, deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection. However, most approaches operate on the closed-set assumption, meaning that the model can only identify pre-defined categories that are present in the training set. Recently, open vocabulary settings were proposed due to the rapid progress of vision language pre-training. These new approaches seek to locate and recognize categories beyond the annotated label space. The open vocabulary approach is more general, practical, and effective compared to weakly supervised and zero-shot settings. This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field. In particular, we begin by comparing it to related concepts such as zero-shot learning, open-set recognition, and out-of-distribution detection. Then, we review several closely related tasks in the case of segmentation and detection, including long-tail problems, few-shot, and zero-shot settings. For the method survey, we first present the basic knowledge of closed-set detection and segmentation as preliminary knowledge. Next, we examine various scenarios in which open vocabulary learning is used, identifying common design elements and core ideas. Then, we compare the recent detection and segmentation approaches in commonly used datasets and benchmarks. Finally, we conclude with insights, issues, and discussions regarding future research directions. To our knowledge, this is the first comprehensive literature review of open vocabulary learning. We keep tracing related works at https://github.com/jianzongwu/Awesome-Open-Vocabulary.
[ { "created": "Wed, 28 Jun 2023 02:33:06 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 2023 10:45:39 GMT", "version": "v2" }, { "created": "Sun, 23 Jul 2023 10:50:18 GMT", "version": "v3" }, { "created": "Thu, 1 Feb 2024 08:31:59 GMT", "version": "v4" } ]
2024-02-02
[ [ "Wu", "Jianzong", "" ], [ "Li", "Xiangtai", "" ], [ "Xu", "Shilin", "" ], [ "Yuan", "Haobo", "" ], [ "Ding", "Henghui", "" ], [ "Yang", "Yibo", "" ], [ "Li", "Xia", "" ], [ "Zhang", "Jiangning", "" ], [ "Tong", "Yunhai", "" ], [ "Jiang", "Xudong", "" ], [ "Ghanem", "Bernard", "" ], [ "Tao", "Dacheng", "" ] ]
In the field of visual scene understanding, deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection. However, most approaches operate on the closed-set assumption, meaning that the model can only identify pre-defined categories that are present in the training set. Recently, open vocabulary settings were proposed due to the rapid progress of vision language pre-training. These new approaches seek to locate and recognize categories beyond the annotated label space. The open vocabulary approach is more general, practical, and effective compared to weakly supervised and zero-shot settings. This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field. In particular, we begin by comparing it to related concepts such as zero-shot learning, open-set recognition, and out-of-distribution detection. Then, we review several closely related tasks in the case of segmentation and detection, including long-tail problems, few-shot, and zero-shot settings. For the method survey, we first present the basic knowledge of closed-set detection and segmentation as preliminary knowledge. Next, we examine various scenarios in which open vocabulary learning is used, identifying common design elements and core ideas. Then, we compare the recent detection and segmentation approaches in commonly used datasets and benchmarks. Finally, we conclude with insights, issues, and discussions regarding future research directions. To our knowledge, this is the first comprehensive literature review of open vocabulary learning. We keep tracing related works at https://github.com/jianzongwu/Awesome-Open-Vocabulary.
1106.2522
Ersen Ekrem
Ersen Ekrem and Sennur Ulukus
Degrees of Freedom Region of the Gaussian MIMO Broadcast Channel with Common and Private Messages
Submitted to IEEE Transactions on Information Theory, May 2011
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Gaussian multiple-input multiple-output (MIMO) broadcast channel with common and private messages. We obtain the degrees of freedom (DoF) region of this channel. We first show that a parallel Gaussian broadcast channel with unmatched sub-channels can be constructed from any given Gaussian MIMO broadcast channel by using the generalized singular value decomposition (GSVD) and a relaxation on the power constraint for the channel input, in a way that the capacity region of the constructed parallel channel provides an outer bound for the capacity region of the original channel. The capacity region of the parallel Gaussian broadcast channel with unmatched sub-channels is known, using which we obtain an explicit outer bound for the DoF region of the Gaussian MIMO broadcast channel. We finally show that this outer bound for the DoF region can be attained both by the achievable scheme that uses a classical Gaussian coding for the common message and dirty-paper coding (DPC) for the private messages, as well as by a variation of the zero-forcing (ZF) scheme.
[ { "created": "Mon, 13 Jun 2011 19:06:43 GMT", "version": "v1" } ]
2011-06-14
[ [ "Ekrem", "Ersen", "" ], [ "Ulukus", "Sennur", "" ] ]
We consider the Gaussian multiple-input multiple-output (MIMO) broadcast channel with common and private messages. We obtain the degrees of freedom (DoF) region of this channel. We first show that a parallel Gaussian broadcast channel with unmatched sub-channels can be constructed from any given Gaussian MIMO broadcast channel by using the generalized singular value decomposition (GSVD) and a relaxation on the power constraint for the channel input, in a way that the capacity region of the constructed parallel channel provides an outer bound for the capacity region of the original channel. The capacity region of the parallel Gaussian broadcast channel with unmatched sub-channels is known, using which we obtain an explicit outer bound for the DoF region of the Gaussian MIMO broadcast channel. We finally show that this outer bound for the DoF region can be attained both by the achievable scheme that uses a classical Gaussian coding for the common message and dirty-paper coding (DPC) for the private messages, as well as by a variation of the zero-forcing (ZF) scheme.
1808.04107
Sajad Daei Omshi
Sajad Daei, Farzan Haddadi, Arash Amini
Improved Recovery of Analysis Sparse Vectors in Presence of Prior Information
null
null
10.1109/LSP.2018.2886141
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we consider the problem of recovering analysis-sparse signals from under-sampled measurements when some prior information about the support is available. We incorporate such information in the recovery stage by suitably tuning the weights in a weighted $\ell_1$ analysis optimization problem. Indeed, we try to set the weights such that the method succeeds with minimum number of measurements. For this purpose, we exploit the upper-bound on the statistical dimension of a certain cone to determine the weights. Our numerical simulations confirm that the introduced method with tuned weights outperforms the standard $\ell_1$ analysis technique.
[ { "created": "Mon, 13 Aug 2018 08:42:45 GMT", "version": "v1" } ]
2019-01-30
[ [ "Daei", "Sajad", "" ], [ "Haddadi", "Farzan", "" ], [ "Amini", "Arash", "" ] ]
In this work, we consider the problem of recovering analysis-sparse signals from under-sampled measurements when some prior information about the support is available. We incorporate such information in the recovery stage by suitably tuning the weights in a weighted $\ell_1$ analysis optimization problem. Indeed, we try to set the weights such that the method succeeds with minimum number of measurements. For this purpose, we exploit the upper-bound on the statistical dimension of a certain cone to determine the weights. Our numerical simulations confirm that the introduced method with tuned weights outperforms the standard $\ell_1$ analysis technique.
1708.07188
Sanjay Goel
Sanjay Goel, Stephen F. Bush, Carlos Gershenson
Self-Organization in Traffic Lights: Evolution of Signal Control with Advances in Sensors and Communications
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traffic signals are ubiquitous devices that first appeared in 1868. Recent advances in information and communications technology (ICT) have led to unprecedented improvements in such areas as mobile handheld devices (i.e., smartphones), the electric power industry (i.e., smart grids), transportation infrastructure, and vehicle area networks. Given the trend towards interconnectivity, it is only a matter of time before vehicles communicate with one another and with infrastructure. In fact, several pilots of such vehicle-to-vehicle and vehicle-to-infrastructure (e.g. traffic lights and parking spaces) communication systems are already operational. This survey of autonomous and self-organized traffic signaling control has been undertaken with these potential developments in mind. Our research results indicate that, while many sophisticated techniques have attempted to improve the scheduling of traffic signal control, either real-time sensing of traffic patterns or a priori knowledge of traffic flow is required to optimize traffic. Once this is achieved, communication between traffic signals will serve to vastly improve overall traffic efficiency.
[ { "created": "Sun, 18 Jun 2017 21:41:24 GMT", "version": "v1" } ]
2017-08-25
[ [ "Goel", "Sanjay", "" ], [ "Bush", "Stephen F.", "" ], [ "Gershenson", "Carlos", "" ] ]
Traffic signals are ubiquitous devices that first appeared in 1868. Recent advances in information and communications technology (ICT) have led to unprecedented improvements in such areas as mobile handheld devices (i.e., smartphones), the electric power industry (i.e., smart grids), transportation infrastructure, and vehicle area networks. Given the trend towards interconnectivity, it is only a matter of time before vehicles communicate with one another and with infrastructure. In fact, several pilots of such vehicle-to-vehicle and vehicle-to-infrastructure (e.g. traffic lights and parking spaces) communication systems are already operational. This survey of autonomous and self-organized traffic signaling control has been undertaken with these potential developments in mind. Our research results indicate that, while many sophisticated techniques have attempted to improve the scheduling of traffic signal control, either real-time sensing of traffic patterns or a priori knowledge of traffic flow is required to optimize traffic. Once this is achieved, communication between traffic signals will serve to vastly improve overall traffic efficiency.
2102.05692
Mollie Bianchi
Mollie Bianchi and Timothy D. Barfoot
UAV Localization Using Autoencoded Satellite Images
Accepted for publication in RA-L 2021
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and demonstrate a fast, robust method for using satellite images to localize an Unmanned Aerial Vehicle (UAV). Previous work using satellite images has large storage and computation costs and is unable to run in real time. In this work, we collect Google Earth (GE) images for a desired flight path offline and an autoencoder is trained to compress these images to a low-dimensional vector representation while retaining the key features. This trained autoencoder is used to compress a real UAV image, which is then compared to the precollected, nearby, autoencoded GE images using an inner-product kernel. This results in a distribution of weights over the corresponding GE image poses and is used to generate a single localization and associated covariance to represent uncertainty. Our localization is computed in 1% of the time of the current standard and is able to achieve a comparable RMSE of less than 3m in our experiments, where we robustly matched UAV images from six runs spanning the lighting conditions of a single day to the same map of satellite images.
[ { "created": "Wed, 10 Feb 2021 19:08:10 GMT", "version": "v1" } ]
2021-02-12
[ [ "Bianchi", "Mollie", "" ], [ "Barfoot", "Timothy D.", "" ] ]
We propose and demonstrate a fast, robust method for using satellite images to localize an Unmanned Aerial Vehicle (UAV). Previous work using satellite images has large storage and computation costs and is unable to run in real time. In this work, we collect Google Earth (GE) images for a desired flight path offline and an autoencoder is trained to compress these images to a low-dimensional vector representation while retaining the key features. This trained autoencoder is used to compress a real UAV image, which is then compared to the precollected, nearby, autoencoded GE images using an inner-product kernel. This results in a distribution of weights over the corresponding GE image poses and is used to generate a single localization and associated covariance to represent uncertainty. Our localization is computed in 1% of the time of the current standard and is able to achieve a comparable RMSE of less than 3m in our experiments, where we robustly matched UAV images from six runs spanning the lighting conditions of a single day to the same map of satellite images.
2007.06604
Amihood Amir
Amihood Amir and Itai Boneh
Update Query Time Trade-off for dynamic Suffix Arrays
19 pages, 3 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Suffix Array SA(S) of a string S[1 ... n] is an array containing all the suffixes of S sorted by lexicographic order. The suffix array is one of the most well-known indexing data structures, and it functions as a key tool in many string algorithms. In this paper, we present a data structure for maintaining the Suffix Array of a dynamic string. For every $0 \leq \varepsilon \leq 1$, our data structure reports SA[i] in $\tilde{O}(n^{\varepsilon})$ time and handles text modification in $\tilde{O}(n^{1-\varepsilon})$ time. Additionally, our data structure enables the same query time for reporting iSA[i], with iSA being the Inverse Suffix Array of S[1 ... n]. Our data structure can be used to construct sub-linear dynamic variants of static string algorithms or data structures that are based on the Suffix Array and the Inverse Suffix Array.
[ { "created": "Mon, 13 Jul 2020 18:11:19 GMT", "version": "v1" } ]
2020-07-15
[ [ "Amir", "Amihood", "" ], [ "Boneh", "Itai", "" ] ]
The Suffix Array SA(S) of a string S[1 ... n] is an array containing all the suffixes of S sorted by lexicographic order. The suffix array is one of the most well-known indexing data structures, and it functions as a key tool in many string algorithms. In this paper, we present a data structure for maintaining the Suffix Array of a dynamic string. For every $0 \leq \varepsilon \leq 1$, our data structure reports SA[i] in $\tilde{O}(n^{\varepsilon})$ time and handles text modification in $\tilde{O}(n^{1-\varepsilon})$ time. Additionally, our data structure enables the same query time for reporting iSA[i], with iSA being the Inverse Suffix Array of S[1 ... n]. Our data structure can be used to construct sub-linear dynamic variants of static string algorithms or data structures that are based on the Suffix Array and the Inverse Suffix Array.
2108.12630
Shuaicheng Li
Shuaicheng Li, Qianggang Cao, Lingbo Liu, Kunlin Yang, Shinan Liu, Jun Hou and Shuai Yi
GroupFormer: Group Activity Recognition with Clustered Spatial-Temporal Transformer
Accepted at ICCV2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Group activity recognition is a crucial yet challenging problem, whose core lies in fully exploring spatial-temporal interactions among individuals and generating reasonable group representations. However, previous methods either model spatial and temporal information separately, or directly aggregate individual features to form group features. To address these issues, we propose a novel group activity recognition network termed GroupFormer. It captures spatial-temporal contextual information jointly to augment the individual and group representations effectively with a clustered spatial-temporal transformer. Specifically, our GroupFormer has three appealing advantages: (1) A tailor-modified Transformer, Clustered Spatial-Temporal Transformer, is proposed to enhance the individual representation and group representation. (2) It models the spatial and temporal dependencies integrally and utilizes decoders to build the bridge between the spatial and temporal information. (3) A clustered attention mechanism is utilized to dynamically divide individuals into multiple clusters for better learning activity-aware semantic representations. Moreover, experimental results show that the proposed framework outperforms state-of-the-art methods on the Volleyball dataset and Collective Activity dataset. Code is available at https://github.com/xueyee/GroupFormer.
[ { "created": "Sat, 28 Aug 2021 11:24:36 GMT", "version": "v1" } ]
2021-08-31
[ [ "Li", "Shuaicheng", "" ], [ "Cao", "Qianggang", "" ], [ "Liu", "Lingbo", "" ], [ "Yang", "Kunlin", "" ], [ "Liu", "Shinan", "" ], [ "Hou", "Jun", "" ], [ "Yi", "Shuai", "" ] ]
Group activity recognition is a crucial yet challenging problem, whose core lies in fully exploring spatial-temporal interactions among individuals and generating reasonable group representations. However, previous methods either model spatial and temporal information separately, or directly aggregate individual features to form group features. To address these issues, we propose a novel group activity recognition network termed GroupFormer. It captures spatial-temporal contextual information jointly to augment the individual and group representations effectively with a clustered spatial-temporal transformer. Specifically, our GroupFormer has three appealing advantages: (1) A tailor-modified Transformer, Clustered Spatial-Temporal Transformer, is proposed to enhance the individual representation and group representation. (2) It models the spatial and temporal dependencies integrally and utilizes decoders to build the bridge between the spatial and temporal information. (3) A clustered attention mechanism is utilized to dynamically divide individuals into multiple clusters for better learning activity-aware semantic representations. Moreover, experimental results show that the proposed framework outperforms state-of-the-art methods on the Volleyball dataset and Collective Activity dataset. Code is available at https://github.com/xueyee/GroupFormer.
2112.01079
Shudong Yang
Shudong Yang (1) ((1) Dalian University of Technology)
Who will dropout from university? Academic risk prediction based on interpretable machine learning
15 pages,7 figures
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the institutional research mode, in order to explore which characteristics are the best indicators for predicting academic risk from student behavior data sets that are high-dimensional, class-imbalanced, and small-sample, this study transforms the academic risk prediction of college students into a binary classification task. It predicts academic risk based on the LightGBM model and the interpretable machine learning method of Shapley values. The simulation results show that, from the global perspective of the prediction model, characteristics such as the quality of academic partners, the seating position in the classroom, the dormitory study atmosphere, the English scores of the college entrance examination, the quantity of academic partners, the addiction level of video games, the mobility of academic partners, and the degree of truancy are the best 8 predictors of academic risk. Contrary to intuition, characteristics such as living on campus or not, work-study, lipstick addiction, being a student leader or not, the number of romantic relationships, and smoking have little correlation with university academic risk in this experiment. From the local perspective of the sample, the factors affecting academic risk vary from person to person. The method can perform personalized interpretable analysis through Shapley values, which cannot be done by traditional mathematical statistical prediction models. The academic contributions of this research are mainly in two aspects: First, the learning interaction network is proposed for the first time, so that social behavior can be used to compensate for one-sided individual behavior and improve the performance of academic risk prediction. Second, the introduction of Shapley value calculation makes machine learning, which lacks a clear reasoning process, visualizable, and provides intuitive decision support for education managers.
[ { "created": "Thu, 2 Dec 2021 09:43:31 GMT", "version": "v1" } ]
2021-12-03
[ [ "Yang", "Shudong", "", "Dalian University of Technology" ] ]
In the institutional research mode, in order to explore which characteristics are the best indicators for predicting academic risk from student behavior data sets that are high-dimensional, class-imbalanced, and small-sample, this study transforms the academic risk prediction of college students into a binary classification task. It predicts academic risk based on the LightGBM model and the interpretable machine learning method of Shapley values. The simulation results show that, from the global perspective of the prediction model, characteristics such as the quality of academic partners, the seating position in the classroom, the dormitory study atmosphere, the English scores of the college entrance examination, the quantity of academic partners, the addiction level of video games, the mobility of academic partners, and the degree of truancy are the best 8 predictors of academic risk. Contrary to intuition, characteristics such as living on campus or not, work-study, lipstick addiction, being a student leader or not, the number of romantic relationships, and smoking have little correlation with university academic risk in this experiment. From the local perspective of the sample, the factors affecting academic risk vary from person to person. The method can perform personalized interpretable analysis through Shapley values, which cannot be done by traditional mathematical statistical prediction models. The academic contributions of this research are mainly in two aspects: First, the learning interaction network is proposed for the first time, so that social behavior can be used to compensate for one-sided individual behavior and improve the performance of academic risk prediction. Second, the introduction of Shapley value calculation makes machine learning, which lacks a clear reasoning process, visualizable, and provides intuitive decision support for education managers.
2009.08044
Mark Hamilton
Mark Hamilton, Nick Gonsalves, Christina Lee, Anand Raman, Brendan Walsh, Siddhartha Prasad, Dalitso Banda, Lucy Zhang, Mei Gao, Lei Zhang, William T. Freeman
Large-Scale Intelligent Microservices
null
null
10.1109/BigData50022.2020.9378270
null
cs.AI cs.DB cs.DC cs.LG cs.NI
http://creativecommons.org/licenses/by/4.0/
Deploying Machine Learning (ML) algorithms within databases is a challenge due to the varied computational footprints of modern ML algorithms and the myriad of database technologies each with its own restrictive syntax. We introduce an Apache Spark-based micro-service orchestration framework that extends database operations to include web service primitives. Our system can orchestrate web services across hundreds of machines and takes full advantage of cluster, thread, and asynchronous parallelism. Using this framework, we provide large scale clients for intelligent services such as speech, vision, search, anomaly detection, and text analysis. This allows users to integrate ready-to-use intelligence into any datastore with an Apache Spark connector. To eliminate the majority of overhead from network communication, we also introduce a low-latency containerized version of our architecture. Finally, we demonstrate that the services we investigate are competitive on a variety of benchmarks, and present two applications of this framework to create intelligent search engines, and real-time auto race analytics systems.
[ { "created": "Thu, 17 Sep 2020 03:38:28 GMT", "version": "v1" }, { "created": "Thu, 3 Dec 2020 20:51:47 GMT", "version": "v2" }, { "created": "Thu, 2 Dec 2021 20:09:30 GMT", "version": "v3" } ]
2022-03-17
[ [ "Hamilton", "Mark", "" ], [ "Gonsalves", "Nick", "" ], [ "Lee", "Christina", "" ], [ "Raman", "Anand", "" ], [ "Walsh", "Brendan", "" ], [ "Prasad", "Siddhartha", "" ], [ "Banda", "Dalitso", "" ], [ "Zhang", "Lucy", "" ], [ "Gao", "Mei", "" ], [ "Zhang", "Lei", "" ], [ "Freeman", "William T.", "" ] ]
Deploying Machine Learning (ML) algorithms within databases is a challenge due to the varied computational footprints of modern ML algorithms and the myriad of database technologies each with its own restrictive syntax. We introduce an Apache Spark-based micro-service orchestration framework that extends database operations to include web service primitives. Our system can orchestrate web services across hundreds of machines and takes full advantage of cluster, thread, and asynchronous parallelism. Using this framework, we provide large scale clients for intelligent services such as speech, vision, search, anomaly detection, and text analysis. This allows users to integrate ready-to-use intelligence into any datastore with an Apache Spark connector. To eliminate the majority of overhead from network communication, we also introduce a low-latency containerized version of our architecture. Finally, we demonstrate that the services we investigate are competitive on a variety of benchmarks, and present two applications of this framework to create intelligent search engines, and real-time auto race analytics systems.
2306.09385
Rahee Walambe Dr
Rahee Walambe, Pranav Nayak, Ashmit Bhardwaj, Ketan Kotecha
Employing Multimodal Machine Learning for Stress Detection
null
null
10.1155/2021/9356452
null
cs.LG cs.AI eess.SP
http://creativecommons.org/licenses/by/4.0/
In the current age, the human lifestyle has become more knowledge-oriented, leading to the generation of sedentary employment. This has given rise to a number of health and mental disorders. Mental wellness is one of the most neglected but crucial aspects of today's world. Mental health issues can, both directly and indirectly, affect other sections of human physiology and impede an individual's day-to-day activities and performance. However, identifying stress and finding the stress trend for an individual that may lead to serious mental ailments is challenging and involves multiple factors. Such identification can be achieved accurately by fusing the multiple modalities arising from behavioral patterns. Certain techniques are identified in the literature for this purpose; however, very few machine learning-based methods are proposed for such multimodal fusion tasks. In this work, a multimodal AI-based framework is proposed to monitor a person's working behavior and stress levels. We propose a methodology for efficiently detecting stress due to workload by concatenating heterogeneous raw sensor data streams (e.g., face expressions, posture, heart rate, computer interaction). This data can be securely stored and analyzed to understand and discover personalized unique behavioral patterns leading to mental strain and fatigue. The contribution of this work is twofold: first, proposing a multimodal AI-based fusion strategy to detect stress and its level; and second, identifying a stress pattern over a period of time. We were able to achieve 96.09% accuracy on the test set in stress detection and classification. Further, we reduce the stress scale prediction model loss to 0.036 using these modalities. This work can prove important for the community at large, specifically those working sedentary jobs, to monitor and identify stress levels, especially in the current times of COVID-19.
[ { "created": "Thu, 15 Jun 2023 14:34:16 GMT", "version": "v1" } ]
2023-06-19
[ [ "Walambe", "Rahee", "" ], [ "Nayak", "Pranav", "" ], [ "Bhardwaj", "Ashmit", "" ], [ "Kotecha", "Ketan", "" ] ]
In the current age, the human lifestyle has become more knowledge-oriented, leading to the generation of sedentary employment. This has given rise to a number of health and mental disorders. Mental wellness is one of the most neglected but crucial aspects of today's world. Mental health issues can, both directly and indirectly, affect other sections of human physiology and impede an individual's day-to-day activities and performance. However, identifying stress and finding the stress trend for an individual that may lead to serious mental ailments is challenging and involves multiple factors. Such identification can be achieved accurately by fusing the multiple modalities arising from behavioral patterns. Certain techniques are identified in the literature for this purpose; however, very few machine learning-based methods are proposed for such multimodal fusion tasks. In this work, a multimodal AI-based framework is proposed to monitor a person's working behavior and stress levels. We propose a methodology for efficiently detecting stress due to workload by concatenating heterogeneous raw sensor data streams (e.g., face expressions, posture, heart rate, computer interaction). This data can be securely stored and analyzed to understand and discover personalized unique behavioral patterns leading to mental strain and fatigue. The contribution of this work is twofold: first, proposing a multimodal AI-based fusion strategy to detect stress and its level; and second, identifying a stress pattern over a period of time. We were able to achieve 96.09% accuracy on the test set in stress detection and classification. Further, we reduce the stress scale prediction model loss to 0.036 using these modalities. This work can prove important for the community at large, specifically those working sedentary jobs, to monitor and identify stress levels, especially in the current times of COVID-19.
1906.02134
Jie Wang
Jie Wang, Xinyan Zhao
Theme-aware generation model for chinese lyrics
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of neural networks, deep learning has been extended to various natural language generation fields, such as machine translation, dialogue generation, and even literature creation. In this paper, we propose a theme-aware language generation model for Chinese music lyrics, which greatly improves the theme-connectivity and coherence of generated paragraphs. A multi-channel sequence-to-sequence (seq2seq) model encodes themes and previous sentences as global and local contextual information. Moreover, an attention mechanism is incorporated for sequence decoding, enabling the model to fuse context into the predicted next texts. To prepare an appropriate training corpus, LDA (Latent Dirichlet Allocation) is applied for theme extraction. The generated lyrics are grammatically correct and semantically coherent with the selected themes, which offers a valuable modelling method for other fields including multi-turn chatbots, long paragraph generation, etc.
[ { "created": "Thu, 23 May 2019 08:50:15 GMT", "version": "v1" } ]
2019-06-06
[ [ "Wang", "Jie", "" ], [ "Zhao", "Xinyan", "" ] ]
With the rapid development of neural networks, deep learning has been extended to various natural language generation fields, such as machine translation, dialogue generation, and even literature creation. In this paper, we propose a theme-aware language generation model for Chinese music lyrics, which greatly improves the theme-connectivity and coherence of generated paragraphs. A multi-channel sequence-to-sequence (seq2seq) model encodes themes and previous sentences as global and local contextual information. Moreover, an attention mechanism is incorporated for sequence decoding, enabling the model to fuse context into the predicted next texts. To prepare an appropriate training corpus, LDA (Latent Dirichlet Allocation) is applied for theme extraction. The generated lyrics are grammatically correct and semantically coherent with the selected themes, which offers a valuable modelling method for other fields including multi-turn chatbots, long paragraph generation, etc.
2301.03221
Arnaud de Mesmay
Eun Jung Kim, Arnaud de Mesmay, Tillmann Miltzow
Representing Matroids over the Reals is $\exists \mathbb R$-complete
v2 and v3: Minor changes v4: Final version, to appear in DMTCS
null
null
null
cs.CC math.CO
http://creativecommons.org/licenses/by/4.0/
A matroid $M$ is an ordered pair $(E,I)$, where $E$ is a finite set called the ground set and a collection $I\subset 2^{E}$ called the independent sets which satisfy the conditions: (i) $\emptyset \in I$, (ii) $I'\subset I \in I$ implies $I'\in I$, and (iii) $I_1,I_2 \in I$ and $|I_1| < |I_2|$ implies that there is an $e\in I_2$ such that $I_1\cup \{e\} \in I$. The rank $rank(M)$ of a matroid $M$ is the maximum size of an independent set. We say that a matroid $M=(E,I)$ is representable over the reals if there is a map $\varphi \colon E \rightarrow \mathbb{R}^{rank(M)}$ such that $I\in I$ if and only if $\varphi(I)$ forms a linearly independent set. We study the problem of matroid realizability over the reals. Given a matroid $M$, we ask whether there is a set of points in the Euclidean space representing $M$. We show that matroid realizability is $\exists \mathbb R$-complete, already for matroids of rank 3. The complexity class $\exists \mathbb R$ can be defined as the family of algorithmic problems that are polynomial-time equivalent to determining whether a multivariate polynomial with integer coefficients has a real root. Our methods are similar to previous methods from the literature; yet, the result itself was never pointed out, and there is no proof readily available in the language of computer science.
[ { "created": "Mon, 9 Jan 2023 09:33:50 GMT", "version": "v1" }, { "created": "Mon, 8 Jan 2024 16:49:45 GMT", "version": "v2" }, { "created": "Tue, 9 Jan 2024 11:13:41 GMT", "version": "v3" }, { "created": "Thu, 11 Jul 2024 14:35:55 GMT", "version": "v4" } ]
2024-07-12
[ [ "Kim", "Eun Jung", "" ], [ "de Mesmay", "Arnaud", "" ], [ "Miltzow", "Tillmann", "" ] ]
A matroid $M$ is an ordered pair $(E,I)$, where $E$ is a finite set called the ground set and a collection $I\subset 2^{E}$ called the independent sets which satisfy the conditions: (i) $\emptyset \in I$, (ii) $I'\subset I \in I$ implies $I'\in I$, and (iii) $I_1,I_2 \in I$ and $|I_1| < |I_2|$ implies that there is an $e\in I_2$ such that $I_1\cup \{e\} \in I$. The rank $rank(M)$ of a matroid $M$ is the maximum size of an independent set. We say that a matroid $M=(E,I)$ is representable over the reals if there is a map $\varphi \colon E \rightarrow \mathbb{R}^{rank(M)}$ such that $I\in I$ if and only if $\varphi(I)$ forms a linearly independent set. We study the problem of matroid realizability over the reals. Given a matroid $M$, we ask whether there is a set of points in the Euclidean space representing $M$. We show that matroid realizability is $\exists \mathbb R$-complete, already for matroids of rank 3. The complexity class $\exists \mathbb R$ can be defined as the family of algorithmic problems that are polynomial-time equivalent to determining whether a multivariate polynomial with integer coefficients has a real root. Our methods are similar to previous methods from the literature; yet, the result itself was never pointed out, and there is no proof readily available in the language of computer science.
1212.2036
Jiwei Li
Jiwei Li and Sujian Li
Query-focused Multi-document Summarization: Combining a Novel Topic Model with Graph-based Semi-supervised Learning
This paper has been withdrawn by the author due to a crucial sign error in equation
null
null
null
cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph-based semi-supervised learning has proven to be an effective approach for query-focused multi-document summarization. The problem with previous semi-supervised learning approaches is that sentences are ranked without considering higher-level information beyond the sentence level. Research on general summarization has illustrated that adding a topic level can effectively improve summary quality. Inspired by previous research, we propose a two-layer (i.e., sentence layer and topic layer) graph-based semi-supervised learning approach. At the same time, we propose a novel topic model that makes full use of the dependence between sentences and words. Experimental results on DUC and TAC data sets demonstrate the effectiveness of our proposed approach.
[ { "created": "Mon, 10 Dec 2012 11:35:29 GMT", "version": "v1" }, { "created": "Fri, 27 Dec 2013 17:24:00 GMT", "version": "v2" }, { "created": "Tue, 31 Dec 2013 17:13:33 GMT", "version": "v3" } ]
2014-01-03
[ [ "Li", "Jiwei", "" ], [ "Li", "Sujian", "" ] ]
Graph-based semi-supervised learning has proven to be an effective approach for query-focused multi-document summarization. The problem with previous semi-supervised learning approaches is that sentences are ranked without considering higher-level information beyond the sentence level. Research on general summarization has illustrated that adding a topic level can effectively improve summary quality. Inspired by previous research, we propose a two-layer (i.e., sentence layer and topic layer) graph-based semi-supervised learning approach. At the same time, we propose a novel topic model that makes full use of the dependence between sentences and words. Experimental results on DUC and TAC data sets demonstrate the effectiveness of our proposed approach.
2207.09774
Edoardo Remelli
Edoardo Remelli, Timur Bagautdinov, Shunsuke Saito, Tomas Simon, Chenglei Wu, Shih-En Wei, Kaiwen Guo, Zhe Cao, Fabian Prada, Jason Saragih, Yaser Sheikh
Drivable Volumetric Avatars using Texel-Aligned Features
null
SIGGRAPH 2022 Conference Proceedings
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance that is indistinguishable from reality. In this work, we propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people. One challenge is driving an avatar while staying faithful to details and dynamics that cannot be captured by a global low-dimensional parameterization such as body pose. Our approach supports driving of clothed avatars with wrinkles and motion that a real driving performer exhibits beyond the training corpus. Unlike existing global state representations or non-parametric screen-space approaches, we introduce texel-aligned features -- a localised representation which can leverage both the structural prior of a skeleton-based parametric model and observed sparse image signals at the same time. Another challenge is modeling a temporally coherent clothed avatar, which typically requires precise surface tracking. To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects. By explicitly incorporating articulation, our approach naturally generalizes to unseen poses. We also introduce a localized viewpoint conditioning, which leads to a large improvement in generalization of view-dependent appearance. The proposed volumetric representation does not require high-quality mesh tracking as a prerequisite and brings significant quality improvements compared to mesh-based counterparts. In our experiments, we carefully examine our design choices and demonstrate the efficacy of our approach, outperforming the state-of-the-art methods on challenging driving scenarios.
[ { "created": "Wed, 20 Jul 2022 09:28:16 GMT", "version": "v1" } ]
2022-07-21
[ [ "Remelli", "Edoardo", "" ], [ "Bagautdinov", "Timur", "" ], [ "Saito", "Shunsuke", "" ], [ "Simon", "Tomas", "" ], [ "Wu", "Chenglei", "" ], [ "Wei", "Shih-En", "" ], [ "Guo", "Kaiwen", "" ], [ "Cao", "Zhe", "" ], [ "Prada", "Fabian", "" ], [ "Saragih", "Jason", "" ], [ "Sheikh", "Yaser", "" ] ]
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance that is indistinguishable from reality. In this work, we propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people. One challenge is driving an avatar while staying faithful to details and dynamics that cannot be captured by a global low-dimensional parameterization such as body pose. Our approach supports driving of clothed avatars with wrinkles and motion that a real driving performer exhibits beyond the training corpus. Unlike existing global state representations or non-parametric screen-space approaches, we introduce texel-aligned features -- a localised representation which can leverage both the structural prior of a skeleton-based parametric model and observed sparse image signals at the same time. Another challenge is modeling a temporally coherent clothed avatar, which typically requires precise surface tracking. To circumvent this, we propose a novel volumetric avatar representation by extending mixtures of volumetric primitives to articulated objects. By explicitly incorporating articulation, our approach naturally generalizes to unseen poses. We also introduce a localized viewpoint conditioning, which leads to a large improvement in generalization of view-dependent appearance. The proposed volumetric representation does not require high-quality mesh tracking as a prerequisite and brings significant quality improvements compared to mesh-based counterparts. In our experiments, we carefully examine our design choices and demonstrate the efficacy of our approach, outperforming the state-of-the-art methods on challenging driving scenarios.
1809.06965
Seung Bin Baik
Seung Bin Baik, Keum Gang Cha
A Study on Deep Learning Based Sauvegrain Method for Measurement of Puberty Bone Age
5 pages, 6 figures, 1 table
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study applies an image augmentation technique to expand the number of images to a level that allows deep learning, and investigates the applicability of the Sauvegrain method through deep learning with relatively few elbow X-rays. The study was composed of processes similar to physicians' bone age assessment procedures. The selected reference images were learned without being included in the evaluation data, and at the same time the data was extended to cover more cases. In addition, we enhanced the X-ray images using U-Net and selected the ROI with an RPN so as to be able to perform bone age estimation through a CNN. The mean absolute error of the deep learning-based Sauvegrain method is 2.8 months and the Mean Absolute Percentage Error (MAPE) is 0.018. This result shows that X-ray analysis using the Sauvegrain method achieves high accuracy for the puberty age group even on a deep learning basis. This means that the Sauvegrain method, learned through deep learning on X-ray images extended with the image augmentation technique, can measure bone age at a level similar to that of an expert. Finally, we applied the Sauvegrain method to deep learning for accurate measurement of bone age at puberty. As a result, we confirmed that the present deep learning-based approach, compared with the evaluation results of experts, can overcome the limitations of machine learning-based bone age measurement methods such as TW3 or Greulich & Pyle that stem from a lack of X-ray data, and we also showed that the Sauvegrain method is applicable to adolescents.
[ { "created": "Tue, 18 Sep 2018 23:47:08 GMT", "version": "v1" } ]
2018-09-20
[ [ "Baik", "Seung Bin", "" ], [ "Cha", "Keum Gang", "" ] ]
This study applies an image augmentation technique to expand the number of images to a level that allows deep learning, and investigates the applicability of the Sauvegrain method through deep learning with relatively few elbow X-rays. The study was composed of processes similar to physicians' bone age assessment procedures. The selected reference images were learned without being included in the evaluation data, and at the same time the data was extended to cover more cases. In addition, we enhanced the X-ray images using U-Net and selected the ROI with an RPN so as to be able to perform bone age estimation through a CNN. The mean absolute error of the deep learning-based Sauvegrain method is 2.8 months and the Mean Absolute Percentage Error (MAPE) is 0.018. This result shows that X-ray analysis using the Sauvegrain method achieves high accuracy for the puberty age group even on a deep learning basis. This means that the Sauvegrain method, learned through deep learning on X-ray images extended with the image augmentation technique, can measure bone age at a level similar to that of an expert. Finally, we applied the Sauvegrain method to deep learning for accurate measurement of bone age at puberty. As a result, we confirmed that the present deep learning-based approach, compared with the evaluation results of experts, can overcome the limitations of machine learning-based bone age measurement methods such as TW3 or Greulich & Pyle that stem from a lack of X-ray data, and we also showed that the Sauvegrain method is applicable to adolescents.
2404.02608
Jeferson Gonzalez-Gomez
Jeferson Gonzalez-Gomez, Hassan Nassar, Lars Bauer and Jorg Henkel
LightFAt: Mitigating Control-flow Explosion via Lightweight PMU-based Control-flow Attestation
The official version of this paper will appear in the 2024 IEEE International Symposium on Hardware Oriented Security and Trust (HOST)
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the continuous evolution of computational devices, more and more applications are being executed remotely. The applications operate on a wide spectrum of devices, ranging from IoT nodes with low computational capabilities to large cloud providers with high capabilities. Remote execution often deals with sensitive data or executes proprietary software. Hence, the challenge of ensuring that the code execution will not be compromised arises. Remote Attestation deals with this challenge. It ensures the code is executed in a non-compromised environment by calculating a potentially large sequence of cryptographic hash values. Each hash calculation is computationally intensive, and over a large sequence the overhead becomes extremely high. In this work, we propose LightFAt: a Lightweight Control Flow Attestation scheme. Instead of relying on expensive cryptographic hash calculations, LightFAt leverages the readings from the processor's Performance Monitor Unit (PMU) in conjunction with a lightweight unsupervised machine learning (ML) classifier to detect whether a target application's control flow is compromised, hence improving the system's security. On the verifier's side, LightFAt reaches a detection accuracy of over 95%, with low false-negative and false-positive rates.
[ { "created": "Wed, 3 Apr 2024 09:55:15 GMT", "version": "v1" }, { "created": "Thu, 4 Apr 2024 09:20:33 GMT", "version": "v2" } ]
2024-04-05
[ [ "Gonzalez-Gomez", "Jeferson", "" ], [ "Nassar", "Hassan", "" ], [ "Bauer", "Lars", "" ], [ "Henkel", "Jorg", "" ] ]
With the continuous evolution of computational devices, more and more applications are being executed remotely. The applications operate on a wide spectrum of devices, ranging from IoT nodes with low computational capabilities to large cloud providers with high capabilities. Remote execution often deals with sensitive data or executes proprietary software. Hence, the challenge of ensuring that the code execution will not be compromised arises. Remote Attestation deals with this challenge. It ensures the code is executed in a non-compromised environment by calculating a potentially large sequence of cryptographic hash values. Each hash calculation is computationally intensive, and over a large sequence the overhead becomes extremely high. In this work, we propose LightFAt: a Lightweight Control Flow Attestation scheme. Instead of relying on expensive cryptographic hash calculations, LightFAt leverages the readings from the processor's Performance Monitor Unit (PMU) in conjunction with a lightweight unsupervised machine learning (ML) classifier to detect whether a target application's control flow is compromised, hence improving the system's security. On the verifier's side, LightFAt reaches a detection accuracy of over 95%, with low false-negative and false-positive rates.
2103.02654
Yudi Dong
Yudi Dong and Huaxia Wang and Yu-Dong Yao
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks
5 pages letter
ICC 2022 - IEEE International Conference on Communications
10.1109/ICC45855.2022.9838452
null
cs.LG cs.AI eess.SP
http://creativecommons.org/licenses/by/4.0/
We propose a novel defensive mechanism based on a generative adversarial network (GAN) framework to defend against adversarial attacks in end-to-end communications systems. Specifically, we utilize a generative network to model a powerful adversary and enable the end-to-end communications system to combat the generative attack network via a minimax game. We show that the proposed system not only works well against white-box and black-box adversarial attacks but also possesses excellent generalization capabilities to maintain good performance under no attacks. We also show that our GAN-based end-to-end system outperforms the conventional communications system and the end-to-end communications system with/without adversarial training.
[ { "created": "Wed, 3 Mar 2021 20:04:42 GMT", "version": "v1" } ]
2022-08-16
[ [ "Dong", "Yudi", "" ], [ "Wang", "Huaxia", "" ], [ "Yao", "Yu-Dong", "" ] ]
We propose a novel defensive mechanism based on a generative adversarial network (GAN) framework to defend against adversarial attacks in end-to-end communications systems. Specifically, we utilize a generative network to model a powerful adversary and enable the end-to-end communications system to combat the generative attack network via a minimax game. We show that the proposed system not only works well against white-box and black-box adversarial attacks but also possesses excellent generalization capabilities to maintain good performance under no attacks. We also show that our GAN-based end-to-end system outperforms the conventional communications system and the end-to-end communications system with/without adversarial training.
2311.14534
Ali Ismail-Fawaz
Ali Ismail-Fawaz, Maxime Devanne, Stefano Berretti, Jonathan Weber, Germain Forestier
Finding Foundation Models for Time Series Classification with a PreText Task
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Over the past decade, Time Series Classification (TSC) has gained increasing attention. While various methods have been explored, deep learning - particularly through Convolutional Neural Networks (CNNs) - stands out as an effective approach. However, due to the limited availability of training data, defining a foundation model for TSC that overcomes the overfitting problem is still a challenging task. The UCR archive, encompassing a wide spectrum of datasets ranging from motion recognition to ECG-based heart disease detection, serves as a prime example for exploring this issue in diverse TSC scenarios. In this paper, we address the overfitting challenge by introducing pre-trained domain foundation models. A key aspect of our methodology is a novel pretext task that spans multiple datasets. This task is designed to identify the originating dataset of each time series sample, with the goal of creating flexible convolution filters that can be applied across different datasets. The research process consists of two phases: a pre-training phase where the model acquires general features through the pretext task, and a subsequent fine-tuning phase for specific dataset classifications. Our extensive experiments on the UCR archive demonstrate that this pre-training strategy significantly outperforms the conventional training approach without pre-training. This strategy effectively reduces overfitting in small datasets and provides an efficient route for adapting these models to new datasets, thus advancing the capabilities of deep learning in TSC.
[ { "created": "Fri, 24 Nov 2023 15:03:55 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 13:58:20 GMT", "version": "v2" } ]
2024-02-29
[ [ "Ismail-Fawaz", "Ali", "" ], [ "Devanne", "Maxime", "" ], [ "Berretti", "Stefano", "" ], [ "Weber", "Jonathan", "" ], [ "Forestier", "Germain", "" ] ]
Over the past decade, Time Series Classification (TSC) has gained increasing attention. While various methods have been explored, deep learning - particularly through Convolutional Neural Networks (CNNs) - stands out as an effective approach. However, due to the limited availability of training data, defining a foundation model for TSC that overcomes the overfitting problem is still a challenging task. The UCR archive, encompassing a wide spectrum of datasets ranging from motion recognition to ECG-based heart disease detection, serves as a prime example for exploring this issue in diverse TSC scenarios. In this paper, we address the overfitting challenge by introducing pre-trained domain foundation models. A key aspect of our methodology is a novel pretext task that spans multiple datasets. This task is designed to identify the originating dataset of each time series sample, with the goal of creating flexible convolution filters that can be applied across different datasets. The research process consists of two phases: a pre-training phase where the model acquires general features through the pretext task, and a subsequent fine-tuning phase for specific dataset classifications. Our extensive experiments on the UCR archive demonstrate that this pre-training strategy significantly outperforms the conventional training approach without pre-training. This strategy effectively reduces overfitting in small datasets and provides an efficient route for adapting these models to new datasets, thus advancing the capabilities of deep learning in TSC.
2306.13064
Kate Boxer
Kate S. Boxer, Edward McFowland III, Daniel B. Neill
Auditing Predictive Models for Intersectional Biases
29 pages, 7 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we propose Conditional Bias Scan (CBS), a flexible auditing framework for detecting intersectional biases in classification models. CBS identifies the subgroup for which there is the most significant bias against the protected class, as compared to the equivalent subgroup in the non-protected class, and can incorporate multiple commonly used fairness definitions for both probabilistic and binarized predictions. We show that this methodology can detect previously unidentified intersectional and contextual biases in the COMPAS pre-trial risk assessment tool and has higher bias detection power compared to similar methods that audit for subgroup fairness.
[ { "created": "Thu, 22 Jun 2023 17:32:12 GMT", "version": "v1" } ]
2023-06-23
[ [ "Boxer", "Kate S.", "" ], [ "McFowland", "Edward", "III" ], [ "Neill", "Daniel B.", "" ] ]
Predictive models that satisfy group fairness criteria in aggregate for members of a protected class, but do not guarantee subgroup fairness, could produce biased predictions for individuals at the intersection of two or more protected classes. To address this risk, we propose Conditional Bias Scan (CBS), a flexible auditing framework for detecting intersectional biases in classification models. CBS identifies the subgroup for which there is the most significant bias against the protected class, as compared to the equivalent subgroup in the non-protected class, and can incorporate multiple commonly used fairness definitions for both probabilistic and binarized predictions. We show that this methodology can detect previously unidentified intersectional and contextual biases in the COMPAS pre-trial risk assessment tool and has higher bias detection power compared to similar methods that audit for subgroup fairness.
2310.19182
Junjiao Tian
Junjiao Tian, Yen-Cheng Liu, James Seale Smith, Zsolt Kira
Fast Trainable Projection for Robust Fine-Tuning
Accepted to NeurIPS 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Robust fine-tuning aims to achieve competitive in-distribution (ID) performance while maintaining the out-of-distribution (OOD) robustness of a pre-trained model when transferring it to a downstream task. Recently, projected gradient descent has been successfully used in robust fine-tuning by constraining the deviation from the initialization of the fine-tuned model explicitly through projection. However, algorithmically, two limitations prevent this method from being adopted more widely, scalability and efficiency. In this paper, we propose a new projection-based fine-tuning algorithm, Fast Trainable Projection (FTP) for computationally efficient learning of per-layer projection constraints, resulting in an average $35\%$ speedup on our benchmarks compared to prior works. FTP can be combined with existing optimizers such as AdamW, and be used in a plug-and-play fashion. Finally, we show that FTP is a special instance of hyper-optimizers that tune the hyper-parameters of optimizers in a learnable manner through nested differentiation. Empirically, we show superior robustness on OOD datasets, including domain shifts and natural corruptions, across four different vision tasks with five different pre-trained models. Additionally, we demonstrate that FTP is broadly applicable and beneficial to other learning scenarios such as low-label and continual learning settings thanks to its easy adaptability. The code will be available at https://github.com/GT-RIPL/FTP.git.
[ { "created": "Sun, 29 Oct 2023 22:52:43 GMT", "version": "v1" } ]
2023-10-31
[ [ "Tian", "Junjiao", "" ], [ "Liu", "Yen-Cheng", "" ], [ "Smith", "James Seale", "" ], [ "Kira", "Zsolt", "" ] ]
Robust fine-tuning aims to achieve competitive in-distribution (ID) performance while maintaining the out-of-distribution (OOD) robustness of a pre-trained model when transferring it to a downstream task. Recently, projected gradient descent has been successfully used in robust fine-tuning by constraining the deviation from the initialization of the fine-tuned model explicitly through projection. However, algorithmically, two limitations prevent this method from being adopted more widely, scalability and efficiency. In this paper, we propose a new projection-based fine-tuning algorithm, Fast Trainable Projection (FTP) for computationally efficient learning of per-layer projection constraints, resulting in an average $35\%$ speedup on our benchmarks compared to prior works. FTP can be combined with existing optimizers such as AdamW, and be used in a plug-and-play fashion. Finally, we show that FTP is a special instance of hyper-optimizers that tune the hyper-parameters of optimizers in a learnable manner through nested differentiation. Empirically, we show superior robustness on OOD datasets, including domain shifts and natural corruptions, across four different vision tasks with five different pre-trained models. Additionally, we demonstrate that FTP is broadly applicable and beneficial to other learning scenarios such as low-label and continual learning settings thanks to its easy adaptability. The code will be available at https://github.com/GT-RIPL/FTP.git.
2205.12428
Ivan Kobyzev
Ivan Kobyzev, Aref Jafari, Mehdi Rezagholizadeh, Tianda Li, Alan Do-Omri, Peng Lu, Pascal Poupart, Ali Ghodsi
Do we need Label Regularization to Fine-tune Pre-trained Language Models?
Published at EACL 2023
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge Distillation (KD) is a prominent neural model compression technique that heavily relies on teacher network predictions to guide the training of a student model. Considering the ever-growing size of pre-trained language models (PLMs), KD is often adopted in many NLP tasks involving PLMs. However, it is evident that in KD, deploying the teacher network during training adds to the memory and computational requirements of training. In the computer vision literature, the necessity of the teacher network is put under scrutiny by showing that KD is a label regularization technique that can be replaced with lighter teacher-free variants such as the label-smoothing technique. However, to the best of our knowledge, this issue is not investigated in NLP. Therefore, this work concerns studying different label regularization techniques and whether we actually need them to improve the fine-tuning of smaller PLM networks on downstream tasks. In this regard, we did a comprehensive set of experiments on different PLMs such as BERT, RoBERTa, and GPT with more than 600 distinct trials and ran each configuration five times. This investigation led to a surprising observation that KD and other label regularization techniques do not play any meaningful role over regular fine-tuning when the student model is pre-trained. We further explore this phenomenon in different settings of NLP and computer vision tasks and demonstrate that pre-training itself acts as a kind of regularization, and additional label regularization is unnecessary.
[ { "created": "Wed, 25 May 2022 01:26:31 GMT", "version": "v1" }, { "created": "Wed, 12 Apr 2023 15:34:03 GMT", "version": "v2" } ]
2023-04-13
[ [ "Kobyzev", "Ivan", "" ], [ "Jafari", "Aref", "" ], [ "Rezagholizadeh", "Mehdi", "" ], [ "Li", "Tianda", "" ], [ "Do-Omri", "Alan", "" ], [ "Lu", "Peng", "" ], [ "Poupart", "Pascal", "" ], [ "Ghodsi", "Ali", "" ] ]
Knowledge Distillation (KD) is a prominent neural model compression technique that heavily relies on teacher network predictions to guide the training of a student model. Considering the ever-growing size of pre-trained language models (PLMs), KD is often adopted in many NLP tasks involving PLMs. However, it is evident that in KD, deploying the teacher network during training adds to the memory and computational requirements of training. In the computer vision literature, the necessity of the teacher network is put under scrutiny by showing that KD is a label regularization technique that can be replaced with lighter teacher-free variants such as the label-smoothing technique. However, to the best of our knowledge, this issue is not investigated in NLP. Therefore, this work concerns studying different label regularization techniques and whether we actually need them to improve the fine-tuning of smaller PLM networks on downstream tasks. In this regard, we did a comprehensive set of experiments on different PLMs such as BERT, RoBERTa, and GPT with more than 600 distinct trials and ran each configuration five times. This investigation led to a surprising observation that KD and other label regularization techniques do not play any meaningful role over regular fine-tuning when the student model is pre-trained. We further explore this phenomenon in different settings of NLP and computer vision tasks and demonstrate that pre-training itself acts as a kind of regularization, and additional label regularization is unnecessary.
2112.11117
Konrad Kollnig
Konrad Kollnig, Reuben Binns, Max Van Kleek, Ulrik Lyngs, Jun Zhao, Claudine Tinsman, Nigel Shadbolt
Before and after GDPR: tracking in mobile apps
null
Internet Policy Review, 2021, 10(4)
10.14763/2021.4.1611
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Third-party tracking, the collection and sharing of behavioural data about individuals, is a significant and ubiquitous privacy threat in mobile apps. The EU General Data Protection Regulation (GDPR) was introduced in 2018 to protect personal data better, but there exists, thus far, limited empirical evidence about its efficacy. This paper studies tracking in nearly two million Android apps from before and after the introduction of the GDPR. Our analysis suggests that there has been limited change in the presence of third-party tracking in apps, and that the concentration of tracking capabilities among a few large gatekeeper companies persists. However, change might be imminent.
[ { "created": "Tue, 21 Dec 2021 11:45:01 GMT", "version": "v1" } ]
2021-12-22
[ [ "Kollnig", "Konrad", "" ], [ "Binns", "Reuben", "" ], [ "Van Kleek", "Max", "" ], [ "Lyngs", "Ulrik", "" ], [ "Zhao", "Jun", "" ], [ "Tinsman", "Claudine", "" ], [ "Shadbolt", "Nigel", "" ] ]
Third-party tracking, the collection and sharing of behavioural data about individuals, is a significant and ubiquitous privacy threat in mobile apps. The EU General Data Protection Regulation (GDPR) was introduced in 2018 to protect personal data better, but there exists, thus far, limited empirical evidence about its efficacy. This paper studies tracking in nearly two million Android apps from before and after the introduction of the GDPR. Our analysis suggests that there has been limited change in the presence of third-party tracking in apps, and that the concentration of tracking capabilities among a few large gatekeeper companies persists. However, change might be imminent.
2202.05413
Yun-Hsin Kuo
Yun-Hsin Kuo, Takanori Fujiwara, Charles C.-K. Chou, Chun-houh Chen, Kwan-Liu Ma
A Machine-Learning-Aided Visual Analysis Workflow for Investigating Air Pollution Data
To appear in the Proceedings of IEEE PacificVis 2022
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyzing air pollution data is challenging as there are various analysis focuses from different aspects: feature (what), space (where), and time (when). As in most geospatial analysis problems, besides high-dimensional features, the temporal and spatial dependencies of air pollution induce the complexity of performing analysis. Machine learning methods, such as dimensionality reduction, can extract and summarize important information of the data to lift the burden of understanding such a complicated environment. In this paper, we present a methodology that utilizes multiple machine learning methods to uniformly explore these aspects. With this methodology, we develop a visual analytic system that supports a flexible analysis workflow, allowing domain experts to freely explore different aspects based on their analysis needs. We demonstrate the capability of our system and analysis workflow supporting a variety of analysis tasks with multiple use cases.
[ { "created": "Fri, 11 Feb 2022 02:24:21 GMT", "version": "v1" } ]
2022-02-14
[ [ "Kuo", "Yun-Hsin", "" ], [ "Fujiwara", "Takanori", "" ], [ "Chou", "Charles C. -K.", "" ], [ "Chen", "Chun-houh", "" ], [ "Ma", "Kwan-Liu", "" ] ]
Analyzing air pollution data is challenging as there are various analysis focuses from different aspects: feature (what), space (where), and time (when). As in most geospatial analysis problems, besides high-dimensional features, the temporal and spatial dependencies of air pollution induce the complexity of performing analysis. Machine learning methods, such as dimensionality reduction, can extract and summarize important information of the data to lift the burden of understanding such a complicated environment. In this paper, we present a methodology that utilizes multiple machine learning methods to uniformly explore these aspects. With this methodology, we develop a visual analytic system that supports a flexible analysis workflow, allowing domain experts to freely explore different aspects based on their analysis needs. We demonstrate the capability of our system and analysis workflow supporting a variety of analysis tasks with multiple use cases.
2208.10781
Jongha Kim
Jongha Kim, Jinheon Baek, Sung Ju Hwang
Object Detection in Aerial Images with Uncertainty-Aware Graph Network
ECCV Workshop 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this work, we propose a novel uncertainty-aware object detection framework with a structured-graph, where nodes and edges are denoted by objects and their spatial-semantic similarities, respectively. Specifically, we aim to consider relationships among objects for effectively contextualizing them. To achieve this, we first detect objects and then measure their semantic and spatial distances to construct an object graph, which is then represented by a graph neural network (GNN) for refining visual CNN features for objects. However, refining CNN features and detection results of every object is inefficient and may not be necessary, as they include correct predictions with low uncertainties. Therefore, we propose to handle uncertain objects by not only transferring the representation from certain objects (sources) to uncertain objects (targets) over the directed graph, but also improving CNN features only on objects regarded as uncertain with their representational outputs from the GNN. Furthermore, we calculate a training loss by giving larger weights on uncertain objects, to concentrate on improving uncertain object predictions while maintaining high performances on certain objects. We refer to our model as Uncertainty-Aware Graph network for object DETection (UAGDet). We then experimentally validate ours on the challenging large-scale aerial image dataset, namely DOTA, that consists of lots of objects with small to large sizes in an image, on which ours improves the performance of the existing object detection network.
[ { "created": "Tue, 23 Aug 2022 07:29:03 GMT", "version": "v1" }, { "created": "Wed, 24 Aug 2022 05:45:37 GMT", "version": "v2" } ]
2022-08-25
[ [ "Kim", "Jongha", "" ], [ "Baek", "Jinheon", "" ], [ "Hwang", "Sung Ju", "" ] ]
In this work, we propose a novel uncertainty-aware object detection framework with a structured-graph, where nodes and edges are denoted by objects and their spatial-semantic similarities, respectively. Specifically, we aim to consider relationships among objects for effectively contextualizing them. To achieve this, we first detect objects and then measure their semantic and spatial distances to construct an object graph, which is then represented by a graph neural network (GNN) for refining visual CNN features for objects. However, refining CNN features and detection results of every object is inefficient and may not be necessary, as they include correct predictions with low uncertainties. Therefore, we propose to handle uncertain objects by not only transferring the representation from certain objects (sources) to uncertain objects (targets) over the directed graph, but also improving CNN features only on objects regarded as uncertain with their representational outputs from the GNN. Furthermore, we calculate a training loss by giving larger weights on uncertain objects, to concentrate on improving uncertain object predictions while maintaining high performances on certain objects. We refer to our model as Uncertainty-Aware Graph network for object DETection (UAGDet). We then experimentally validate ours on the challenging large-scale aerial image dataset, namely DOTA, that consists of lots of objects with small to large sizes in an image, on which ours improves the performance of the existing object detection network.
2308.13663
Paula Mercurio
Paula Mercurio and Di Liu
Network Embedding Using Sparse Approximations of Random Walks
20 pages, 4 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose an efficient numerical implementation of Network Embedding based on commute times, using sparse approximation of a diffusion process on the network obtained by a modified version of the diffusion wavelet algorithm. The node embeddings are computed by optimizing the cross entropy loss via the stochastic gradient descent method with sampling of low-dimensional representations of Green's functions. We demonstrate the efficacy of this method for data clustering and multi-label classification through several examples, and compare its performance over existing methods in terms of efficiency and accuracy. Theoretical issues justifying the scheme are also discussed.
[ { "created": "Fri, 25 Aug 2023 20:35:45 GMT", "version": "v1" } ]
2023-08-29
[ [ "Mercurio", "Paula", "" ], [ "Liu", "Di", "" ] ]
In this paper, we propose an efficient numerical implementation of Network Embedding based on commute times, using sparse approximation of a diffusion process on the network obtained by a modified version of the diffusion wavelet algorithm. The node embeddings are computed by optimizing the cross entropy loss via the stochastic gradient descent method with sampling of low-dimensional representations of Green's functions. We demonstrate the efficacy of this method for data clustering and multi-label classification through several examples, and compare its performance over existing methods in terms of efficiency and accuracy. Theoretical issues justifying the scheme are also discussed.
1812.10668
\'Alvaro L\'opez Garc\'ia
\'Alvaro L\'opez Garc\'ia, Enol Fern\'andez-del-Castillo, Isabel Campos Plasencia
An efficient cloud scheduler design supporting preemptible instances
null
Future Generation Computer Systems (2019)
10.1016/j.future.2018.12.057
null
cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Maximizing resource utilization by performing efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that is able to support preemptible instances, that can be stopped by higher priority requests without introducing large modifications in the current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow more efficient usage of the resources and potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
[ { "created": "Thu, 27 Dec 2018 09:14:37 GMT", "version": "v1" }, { "created": "Thu, 3 Jan 2019 10:10:37 GMT", "version": "v2" }, { "created": "Tue, 28 Jan 2020 08:42:41 GMT", "version": "v3" } ]
2020-01-29
[ [ "García", "Álvaro López", "" ], [ "Fernández-del-Castillo", "Enol", "" ], [ "Plasencia", "Isabel Campos", "" ] ]
Maximizing resource utilization by performing efficient resource provisioning is a key factor for any cloud provider: commercial actors can maximize their revenues, whereas scientific and non-commercial providers can maximize their infrastructure utilization. Traditionally, batch systems have allowed data centers to fill their resources as much as possible by using backfilling and similar techniques. However, in an IaaS cloud, where virtual machines are supposed to live indefinitely, or at least as long as the user is able to pay for them, these policies are not easily implementable. In this work we present a new scheduling algorithm for IaaS providers that is able to support preemptible instances, that can be stopped by higher priority requests without introducing large modifications in the current cloud schedulers. This scheduler enables the implementation of new cloud usage and payment models that allow more efficient usage of the resources and potential new revenue sources for commercial providers. We also study the correctness and the performance overhead of the proposed scheduler against existing solutions.
2204.01807
Scott Workman
Scott Workman, M. Usman Rafique, Hunter Blanton, Nathan Jacobs
Revisiting Near/Remote Sensing with Geospatial Attention
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work addresses the task of overhead image segmentation when auxiliary ground-level images are available. Recent work has shown that performing joint inference over these two modalities, often called near/remote sensing, can yield significant accuracy improvements. Extending this line of work, we introduce the concept of geospatial attention, a geometry-aware attention mechanism that explicitly considers the geospatial relationship between the pixels in a ground-level image and a geographic location. We propose an approach for computing geospatial attention that incorporates geometric features and the appearance of the overhead and ground-level imagery. We introduce a novel architecture for near/remote sensing that is based on geospatial attention and demonstrate its use for five segmentation tasks. The results demonstrate that our method significantly outperforms the previous state-of-the-art methods.
[ { "created": "Mon, 4 Apr 2022 19:19:50 GMT", "version": "v1" } ]
2022-04-06
[ [ "Workman", "Scott", "" ], [ "Rafique", "M. Usman", "" ], [ "Blanton", "Hunter", "" ], [ "Jacobs", "Nathan", "" ] ]
This work addresses the task of overhead image segmentation when auxiliary ground-level images are available. Recent work has shown that performing joint inference over these two modalities, often called near/remote sensing, can yield significant accuracy improvements. Extending this line of work, we introduce the concept of geospatial attention, a geometry-aware attention mechanism that explicitly considers the geospatial relationship between the pixels in a ground-level image and a geographic location. We propose an approach for computing geospatial attention that incorporates geometric features and the appearance of the overhead and ground-level imagery. We introduce a novel architecture for near/remote sensing that is based on geospatial attention and demonstrate its use for five segmentation tasks. The results demonstrate that our method significantly outperforms the previous state-of-the-art methods.
1603.00816
Kezhi Li
Kezhi Li, Daniel Holland
A Nonlinear Weighted Total Variation Image Reconstruction Algorithm for Electrical Capacitance Tomography
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new iterative image reconstruction algorithm for electrical capacitance tomography (ECT) is proposed that is based on iterative soft thresholding of a total variation penalty and adaptive reweighted compressive sensing. This algorithm encourages sharp changes in the ECT image and overcomes the disadvantage of the $l_1$ minimization by equipping the total variation with an adaptive weighting depending on the reconstructed image. Moreover, the non-linear effect is also partially reduced due to the adoption of an updated sensitivity matrix. Simulation results show that the proposed algorithm recovers ECT images more precisely than existing state-of-the-art algorithms and therefore is suitable for the imaging of multiphase systems in industrial or medical applications.
[ { "created": "Wed, 2 Mar 2016 18:24:32 GMT", "version": "v1" }, { "created": "Mon, 21 Nov 2016 15:01:41 GMT", "version": "v2" } ]
2016-11-22
[ [ "Li", "Kezhi", "" ], [ "Holland", "Daniel", "" ] ]
A new iterative image reconstruction algorithm for electrical capacitance tomography (ECT) is proposed that is based on iterative soft thresholding of a total variation penalty and adaptive reweighted compressive sensing. This algorithm encourages sharp changes in the ECT image and overcomes the disadvantage of the $l_1$ minimization by equipping the total variation with an adaptive weighting depending on the reconstructed image. Moreover, the non-linear effect is also partially reduced due to the adoption of an updated sensitivity matrix. Simulation results show that the proposed algorithm recovers ECT images more precisely than existing state-of-the-art algorithms and therefore is suitable for the imaging of multiphase systems in industrial or medical applications.
2010.01478
Shruthi Chari
Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, Deborah L. McGuinness
Explanation Ontology in Action: A Clinical Use-Case
5 pages, 2 figures, 1 protocol
International Semantic Web Conference, Poster and Demo Track, 2020
null
null
cs.AI cs.HC cs.LG
http://creativecommons.org/licenses/by/4.0/
We addressed the problem of a lack of semantic representation for user-centric explanations and different explanation types in our Explanation Ontology (https://purl.org/heals/eo). Such a representation is increasingly necessary as explainability has become an important problem in Artificial Intelligence with the emergence of complex methods and an uptake in high-precision and user-facing settings. In this submission, we provide step-by-step guidance for system designers to utilize our ontology, introduced in our resource track paper, to plan and model for explanations during the design of their Artificial Intelligence systems. We also provide a detailed example with our utilization of this guidance in a clinical setting.
[ { "created": "Sun, 4 Oct 2020 03:52:39 GMT", "version": "v1" } ]
2020-10-06
[ [ "Chari", "Shruthi", "" ], [ "Seneviratne", "Oshani", "" ], [ "Gruen", "Daniel M.", "" ], [ "Foreman", "Morgan A.", "" ], [ "Das", "Amar K.", "" ], [ "McGuinness", "Deborah L.", "" ] ]
We addressed the problem of a lack of semantic representation for user-centric explanations and different explanation types in our Explanation Ontology (https://purl.org/heals/eo). Such a representation is increasingly necessary as explainability has become an important problem in Artificial Intelligence with the emergence of complex methods and an uptake in high-precision and user-facing settings. In this submission, we provide step-by-step guidance for system designers to utilize our ontology, introduced in our resource track paper, to plan and model for explanations during the design of their Artificial Intelligence systems. We also provide a detailed example with our utilization of this guidance in a clinical setting.
2306.09561
Renan Leandro Fernandes
Renan Fernandes (1), Fred Freitas (1), Ivan Varzinczak (2, 3 and 4) and Pedro PM Farias (1 and 5) ((1) Centro de Inform\'atica - Universidade Federal de Pernambuco, (2) LIASD - Universit\'e Paris 8, (3) CAIR - University of Cape Town, (4) ISTI - CNR and (5) ARCE, Public Services Regulation Agency-CE)
A connection method for a defeasible extension of $\mathcal{ALCH}$
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper proposes a connection method \`a la Bibel for an exception-tolerant family of description logics (DLs). As for the language, we assume the DL $\mathcal{ALCH}$ extended with two typicality operators: one on (complex) concepts and one on role names. The language is a variant of defeasible DLs, as broadly studied in the literature over the past decade, in which most of these can be embedded. We revisit the definition of the matrix representation of a knowledge base and establish the conditions for a given axiom to be provable. We show that the calculus terminates and is sound and complete w.r.t. a DL version of the preferential semantics widely adopted in non-monotonic reasoning.
[ { "created": "Fri, 16 Jun 2023 00:32:14 GMT", "version": "v1" }, { "created": "Thu, 22 Jun 2023 13:38:48 GMT", "version": "v2" } ]
2023-06-23
[ [ "Fernandes", "Renan", "", "2, 3 and 4" ], [ "Freitas", "Fred", "", "2, 3 and 4" ], [ "Varzinczak", "Ivan", "", "2, 3 and 4" ], [ "Farias", "Pedro PM", "", "1 and 5" ] ]
This paper proposes a connection method \`a la Bibel for an exception-tolerant family of description logics (DLs). As for the language, we assume the DL $\mathcal{ALCH}$ extended with two typicality operators: one on (complex) concepts and one on role names. The language is a variant of defeasible DLs, as broadly studied in the literature over the past decade, in which most of these can be embedded. We revisit the definition of the matrix representation of a knowledge base and establish the conditions for a given axiom to be provable. We show that the calculus terminates and is sound and complete w.r.t. a DL version of the preferential semantics widely adopted in non-monotonic reasoning.
2305.07710
Anubhav Jain
Anubhav Jain, Nasir Memon, Julian Togelius
Zero-shot racially balanced dataset generation using an existing biased StyleGAN2
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Facial recognition systems have made significant strides thanks to data-heavy deep learning models, but these models rely on large privacy-sensitive datasets. Further, many of these datasets lack diversity in terms of ethnicity and demographics, which can lead to biased models that can have serious societal and security implications. To address these issues, we propose a methodology that leverages the biased generative model StyleGAN2 to create demographically diverse images of synthetic individuals. The synthetic dataset is created using a novel evolutionary search algorithm that targets specific demographic groups. By training face recognition models with the resulting balanced dataset containing 50,000 identities per race (13.5 million images in total), we can improve their performance and minimize biases that might have been present in a model trained on a real dataset.
[ { "created": "Fri, 12 May 2023 18:07:10 GMT", "version": "v1" }, { "created": "Mon, 18 Sep 2023 17:48:17 GMT", "version": "v2" } ]
2023-09-20
[ [ "Jain", "Anubhav", "" ], [ "Memon", "Nasir", "" ], [ "Togelius", "Julian", "" ] ]
Facial recognition systems have made significant strides thanks to data-heavy deep learning models, but these models rely on large privacy-sensitive datasets. Further, many of these datasets lack diversity in terms of ethnicity and demographics, which can lead to biased models that can have serious societal and security implications. To address these issues, we propose a methodology that leverages the biased generative model StyleGAN2 to create demographically diverse images of synthetic individuals. The synthetic dataset is created using a novel evolutionary search algorithm that targets specific demographic groups. By training face recognition models with the resulting balanced dataset containing 50,000 identities per race (13.5 million images in total), we can improve their performance and minimize biases that might have been present in a model trained on a real dataset.
0904.1729
Sugumar Murugesan
Sugumar Murugesan, Philip Schniter
Joint Opportunistic Scheduling in Multi-Cellular Systems
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of multiuser scheduling with partial channel information in a multi-cell environment. The scheduling problem is formulated jointly with the ARQ based channel learning process and the intercell interference mitigating cell breathing protocol. The optimal joint scheduling policy under various system constraints is established. The general problem is posed as a generalized Restless Multiarmed Bandit process and the notion of indexability is studied. We conjecture, with numerical support, that the multicell multiuser scheduling problem is indexable and obtain a partial structure of the index policy.
[ { "created": "Fri, 10 Apr 2009 18:55:12 GMT", "version": "v1" } ]
2009-04-13
[ [ "Murugesan", "Sugumar", "" ], [ "Schniter", "Philip", "" ] ]
We address the problem of multiuser scheduling with partial channel information in a multi-cell environment. The scheduling problem is formulated jointly with the ARQ based channel learning process and the intercell interference mitigating cell breathing protocol. The optimal joint scheduling policy under various system constraints is established. The general problem is posed as a generalized Restless Multiarmed Bandit process and the notion of indexability is studied. We conjecture, with numerical support, that the multicell multiuser scheduling problem is indexable and obtain a partial structure of the index policy.
2111.05505
Zhikun Chen
Zhikun Chen, Daofeng Li, Jinkang Zhu and Sihai Zhang
DACFL: Dynamic Average Consensus Based Federated Learning in Decentralized Topology
null
Sensors 2022, 22(9), 3317
10.3390/s22093317
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Federated learning (FL) is a burgeoning distributed machine learning framework where a central parameter server (PS) coordinates many local users to train a globally consistent model. Conventional federated learning inevitably relies on a centralized topology with a PS. As a result, it will be paralyzed once the PS fails. To alleviate such a single point of failure, especially on the PS, some existing work has provided decentralized FL (DFL) implementations like CDSGD and D-PSGD to facilitate FL in a decentralized topology. However, there are still some problems with these methods, e.g., significant divergence between users' final models in CDSGD and a network-wide model average necessity in D-PSGD. In order to solve these deficiencies, this paper devises a new DFL implementation coined as DACFL, where each user trains its model using its own training data and exchanges the intermediate models with its neighbors through a symmetric and doubly stochastic matrix. The DACFL treats the progress of each user's local training as a discrete-time process and employs a first order dynamic average consensus (FODAC) method to track the \textit{average model} in the absence of the PS. In this paper, we also provide a theoretical convergence analysis of DACFL on the premise of i.i.d data to strengthen its rationality. The experimental results on MNIST, Fashion-MNIST and CIFAR-10 validate the feasibility of our solution in both time-invariant and time-varying network topologies, and show that DACFL outperforms D-PSGD and CDSGD in most cases.
[ { "created": "Wed, 10 Nov 2021 03:00:40 GMT", "version": "v1" } ]
2023-12-13
[ [ "Chen", "Zhikun", "" ], [ "Li", "Daofeng", "" ], [ "Zhu", "Jinkang", "" ], [ "Zhang", "Sihai", "" ] ]
Federated learning (FL) is a burgeoning distributed machine learning framework where a central parameter server (PS) coordinates many local users to train a globally consistent model. Conventional federated learning inevitably relies on a centralized topology with a PS. As a result, it will be paralyzed once the PS fails. To alleviate such a single point of failure, especially on the PS, some existing work has provided decentralized FL (DFL) implementations like CDSGD and D-PSGD to facilitate FL in a decentralized topology. However, there are still some problems with these methods, e.g., significant divergence between users' final models in CDSGD and a network-wide model average necessity in D-PSGD. In order to solve these deficiencies, this paper devises a new DFL implementation coined as DACFL, where each user trains its model using its own training data and exchanges the intermediate models with its neighbors through a symmetric and doubly stochastic matrix. The DACFL treats the progress of each user's local training as a discrete-time process and employs a first order dynamic average consensus (FODAC) method to track the \textit{average model} in the absence of the PS. In this paper, we also provide a theoretical convergence analysis of DACFL on the premise of i.i.d data to strengthen its rationality. The experimental results on MNIST, Fashion-MNIST and CIFAR-10 validate the feasibility of our solution in both time-invariant and time-varying network topologies, and show that DACFL outperforms D-PSGD and CDSGD in most cases.
2101.02429
Burak Bartan
Burak Bartan, Mert Pilanci
Neural Spectrahedra and Semidefinite Lifts: Global Convex Optimization of Polynomial Activation Neural Networks in Fully Polynomial-Time
null
null
null
null
cs.LG cs.CC math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The training of two-layer neural networks with nonlinear activation functions is an important non-convex optimization problem with numerous applications and promising performance in layerwise deep learning. In this paper, we develop exact convex optimization formulations for two-layer neural networks with second degree polynomial activations based on semidefinite programming. Remarkably, we show that semidefinite lifting is always exact and therefore computational complexity for global optimization is polynomial in the input dimension and sample size for all input data. The developed convex formulations are proven to achieve the same global optimal solution set as their non-convex counterparts. More specifically, the globally optimal two-layer neural network with polynomial activations can be found by solving a semidefinite program (SDP) and decomposing the solution using a procedure we call Neural Decomposition. Moreover, the choice of regularizers plays a crucial role in the computational tractability of neural network training. We show that the standard weight decay regularization formulation is NP-hard, whereas other simple convex penalties render the problem tractable in polynomial time via convex programming. We extend the results beyond the fully connected architecture to different neural network architectures including networks with vector outputs and convolutional architectures with pooling. We provide extensive numerical simulations showing that the standard backpropagation approach often fails to achieve the global optimum of the training loss. The proposed approach is significantly faster than the standard backpropagation procedure and obtains better test accuracy.
[ { "created": "Thu, 7 Jan 2021 08:43:01 GMT", "version": "v1" } ]
2021-01-11
[ [ "Bartan", "Burak", "" ], [ "Pilanci", "Mert", "" ] ]
The training of two-layer neural networks with nonlinear activation functions is an important non-convex optimization problem with numerous applications and promising performance in layerwise deep learning. In this paper, we develop exact convex optimization formulations for two-layer neural networks with second degree polynomial activations based on semidefinite programming. Remarkably, we show that semidefinite lifting is always exact and therefore computational complexity for global optimization is polynomial in the input dimension and sample size for all input data. The developed convex formulations are proven to achieve the same global optimal solution set as their non-convex counterparts. More specifically, the globally optimal two-layer neural network with polynomial activations can be found by solving a semidefinite program (SDP) and decomposing the solution using a procedure we call Neural Decomposition. Moreover, the choice of regularizers plays a crucial role in the computational tractability of neural network training. We show that the standard weight decay regularization formulation is NP-hard, whereas other simple convex penalties render the problem tractable in polynomial time via convex programming. We extend the results beyond the fully connected architecture to different neural network architectures including networks with vector outputs and convolutional architectures with pooling. We provide extensive numerical simulations showing that the standard backpropagation approach often fails to achieve the global optimum of the training loss. The proposed approach is significantly faster than the standard backpropagation procedure and obtains better test accuracy.
1803.01504
Dan Xu
Dan Xu, Xavier Alameda-Pineda, Jingkuan Song, Elisa Ricci, Nicu Sebe
Cross-Paced Representation Learning with Partial Curricula for Sketch-based Image Retrieval
null
null
10.1109/TIP.2018.2837381
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we address the problem of learning robust cross-domain representations for sketch-based image retrieval (SBIR). While most SBIR approaches focus on extracting low- and mid-level descriptors for direct feature matching, recent works have shown the benefit of learning coupled feature representations to describe data from two related sources. However, cross-domain representation learning methods are typically cast into non-convex minimization problems that are difficult to optimize, leading to unsatisfactory performance. Inspired by self-paced learning, a learning methodology designed to overcome convergence issues related to local optima by exploiting the samples in a meaningful order (i.e. easy to hard), we introduce the cross-paced partial curriculum learning (CPPCL) framework. Compared with existing self-paced learning methods which only consider a single modality and cannot deal with prior knowledge, CPPCL is specifically designed to assess the learning pace by jointly handling data from dual sources and modality-specific prior information provided in the form of partial curricula. Additionally, thanks to the learned dictionaries, we demonstrate that the proposed CPPCL embeds robust coupled representations for SBIR. Our approach is extensively evaluated on four publicly available datasets (i.e. CUFS, Flickr15K, QueenMary SBIR and TU-Berlin Extension datasets), showing superior performance over competing SBIR methods.
[ { "created": "Mon, 5 Mar 2018 05:30:08 GMT", "version": "v1" } ]
2018-08-01
[ [ "Xu", "Dan", "" ], [ "Alameda-Pineda", "Xavier", "" ], [ "Song", "Jingkuan", "" ], [ "Ricci", "Elisa", "" ], [ "Sebe", "Nicu", "" ] ]
In this paper we address the problem of learning robust cross-domain representations for sketch-based image retrieval (SBIR). While most SBIR approaches focus on extracting low- and mid-level descriptors for direct feature matching, recent works have shown the benefit of learning coupled feature representations to describe data from two related sources. However, cross-domain representation learning methods are typically cast into non-convex minimization problems that are difficult to optimize, leading to unsatisfactory performance. Inspired by self-paced learning, a learning methodology designed to overcome convergence issues related to local optima by exploiting the samples in a meaningful order (i.e. easy to hard), we introduce the cross-paced partial curriculum learning (CPPCL) framework. Compared with existing self-paced learning methods which only consider a single modality and cannot deal with prior knowledge, CPPCL is specifically designed to assess the learning pace by jointly handling data from dual sources and modality-specific prior information provided in the form of partial curricula. Additionally, thanks to the learned dictionaries, we demonstrate that the proposed CPPCL embeds robust coupled representations for SBIR. Our approach is extensively evaluated on four publicly available datasets (i.e. CUFS, Flickr15K, QueenMary SBIR and TU-Berlin Extension datasets), showing superior performance over competing SBIR methods.
1507.04885
Akbar Rafiey
Akbar Rafiey, Jeff Kinne, J\'an Manuch, Arash Rafiey
Ordering with precedence constraints and budget minimization
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a variation of the scheduling with precedence constraints problem that has applications to molecular folding and production management. We are given a bipartite graph $H=(B,S)$. Vertices in $B$ are thought of as goods or services that must be \emph{bought} to produce items in $S$ that are to be \emph{sold}. An edge from $j\in S$ to $i\in B$ indicates that the production of $j$ requires the purchase of $i$. Each vertex in $B$ has a cost, and each vertex in $S$ results in some gain. The goal is to obtain an ordering of $B\cup S$ that respects the precedence constraints and maximizes the minimal net profit encountered as the vertices are processed. We call this optimal value the \emph{budget} or \emph{capital} investment required for the bipartite graph, and refer to our problem as \emph{the bipartite graph ordering problem}. The problem is equivalent to a version of an NP-complete molecular folding problem that has been studied recently [12]. Work on the molecular folding problem has focused on heuristic algorithms and exponential-time exact algorithms for the un-weighted problem where costs are $\pm 1$ and when restricted to graphs arising from RNA folding. The present work seeks exact algorithms for solving the bipartite graph ordering problem. We demonstrate an algorithm that computes the optimal ordering in time $O^*(2^n)$ when $n$ is the number of vertices in the input bipartite graph. Our main result is a general strategy that can be used to find an optimal ordering in polynomial time for bipartite graphs that satisfy certain properties. We apply the technique to a variety of graph classes, obtaining polynomial-time solutions to the bipartite graph ordering problem for bipartite permutation graphs, trivially perfect graphs, co-bipartite graphs, and trees.
[ { "created": "Fri, 17 Jul 2015 09:11:56 GMT", "version": "v1" }, { "created": "Wed, 22 Jun 2016 01:21:00 GMT", "version": "v2" }, { "created": "Mon, 3 Oct 2016 05:23:37 GMT", "version": "v3" } ]
2016-10-04
[ [ "Rafiey", "Akbar", "" ], [ "Kinne", "Jeff", "" ], [ "Manuch", "Ján", "" ], [ "Rafiey", "Arash", "" ] ]
We introduce a variation of the scheduling with precedence constraints problem that has applications to molecular folding and production management. We are given a bipartite graph $H=(B,S)$. Vertices in $B$ are thought of as goods or services that must be \emph{bought} to produce items in $S$ that are to be \emph{sold}. An edge from $j\in S$ to $i\in B$ indicates that the production of $j$ requires the purchase of $i$. Each vertex in $B$ has a cost, and each vertex in $S$ results in some gain. The goal is to obtain an ordering of $B\cup S$ that respects the precedence constraints and maximizes the minimal net profit encountered as the vertices are processed. We call this optimal value the \emph{budget} or \emph{capital} investment required for the bipartite graph, and refer to our problem as \emph{the bipartite graph ordering problem}. The problem is equivalent to a version of an NP-complete molecular folding problem that has been studied recently [12]. Work on the molecular folding problem has focused on heuristic algorithms and exponential-time exact algorithms for the un-weighted problem where costs are $\pm 1$ and when restricted to graphs arising from RNA folding. The present work seeks exact algorithms for solving the bipartite graph ordering problem. We demonstrate an algorithm that computes the optimal ordering in time $O^*(2^n)$ when $n$ is the number of vertices in the input bipartite graph. Our main result is a general strategy that can be used to find an optimal ordering in polynomial time for bipartite graphs that satisfy certain properties. We apply the technique to a variety of graph classes, obtaining polynomial-time solutions to the bipartite graph ordering problem for bipartite permutation graphs, trivially perfect graphs, co-bipartite graphs, and trees.
2201.08042
Ervin Dervishaj
Ervin Dervishaj and Paolo Cremonesi
GAN-based Matrix Factorization for Recommender Systems
Accepted at the 37th ACM/SIGAPP Symposium on Applied Computing (SAC '22)
null
10.1145/3477314.3507099
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proposed in 2014, Generative Adversarial Networks (GAN) initiated a fresh interest in generative modelling. They immediately achieved state-of-the-art in image synthesis, image-to-image translation, text-to-image generation, image inpainting and have been used in sciences ranging from medicine to high-energy particle physics. Despite their popularity and ability to learn arbitrary distributions, GAN have not been widely applied in recommender systems (RS). Moreover, only a few of the techniques that have introduced GAN in RS have employed them directly as a collaborative filtering (CF) model. In this work we propose a new GAN-based approach that learns user and item latent factors in a matrix factorization setting for the generic top-N recommendation problem. Following the vector-wise GAN training approach for RS introduced by CFGAN, we identify 2 unique issues when utilizing GAN for CF. We propose solutions for both of them by using an autoencoder as discriminator and incorporating an additional loss function for the generator. We evaluate our model, GANMF, through well-known datasets in the RS community and show improvements over traditional CF approaches and GAN-based models. Through an ablation study on the components of GANMF we aim to understand the effects of our architectural choices. Finally, we provide a qualitative evaluation of the matrix factorization performance of GANMF.
[ { "created": "Thu, 20 Jan 2022 08:14:29 GMT", "version": "v1" } ]
2022-01-21
[ [ "Dervishaj", "Ervin", "" ], [ "Cremonesi", "Paolo", "" ] ]
Proposed in 2014, Generative Adversarial Networks (GAN) initiated a fresh interest in generative modelling. They immediately achieved state-of-the-art in image synthesis, image-to-image translation, text-to-image generation, image inpainting and have been used in sciences ranging from medicine to high-energy particle physics. Despite their popularity and ability to learn arbitrary distributions, GAN have not been widely applied in recommender systems (RS). Moreover, only a few of the techniques that have introduced GAN in RS have employed them directly as a collaborative filtering (CF) model. In this work we propose a new GAN-based approach that learns user and item latent factors in a matrix factorization setting for the generic top-N recommendation problem. Following the vector-wise GAN training approach for RS introduced by CFGAN, we identify 2 unique issues when utilizing GAN for CF. We propose solutions for both of them by using an autoencoder as discriminator and incorporating an additional loss function for the generator. We evaluate our model, GANMF, through well-known datasets in the RS community and show improvements over traditional CF approaches and GAN-based models. Through an ablation study on the components of GANMF we aim to understand the effects of our architectural choices. Finally, we provide a qualitative evaluation of the matrix factorization performance of GANMF.
2302.12170
Joel Lehman
Elliot Meyerson and Mark J. Nelson and Herbie Bradley and Adam Gaier and Arash Moradi and Amy K. Hoover and Joel Lehman
Language Model Crossover: Variation through Few-Shot Prompting
null
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e. to prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and to parse its corresponding output as those genotypes' offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically-rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language-model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a promising method for evolving genomes representable as text.
[ { "created": "Thu, 23 Feb 2023 17:12:34 GMT", "version": "v1" }, { "created": "Sat, 7 Oct 2023 17:07:48 GMT", "version": "v2" }, { "created": "Mon, 13 May 2024 23:57:11 GMT", "version": "v3" } ]
2024-05-15
[ [ "Meyerson", "Elliot", "" ], [ "Nelson", "Mark J.", "" ], [ "Bradley", "Herbie", "" ], [ "Gaier", "Adam", "" ], [ "Moradi", "Arash", "" ], [ "Hoover", "Amy K.", "" ], [ "Lehman", "Joel", "" ] ]
This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e. to prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and to parse its corresponding output as those genotypes' offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically-rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language-model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a promising method for evolving genomes representable as text.
2011.04696
Fernando M. Espinoza-Cuadros
Fernando M. Espinoza-Cuadros, Juan M. Perero-Codosero, Javier Ant\'on-Mart\'in, Luis A. Hern\'andez-G\'omez
Speaker De-identification System using Autoencoders and Adversarial Training
null
null
null
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fast increase of web services and mobile apps, which collect personal data from users, increases the risk that their privacy may be severely compromised. In particular, the increasing variety of spoken language interfaces and voice assistants empowered by the vertiginous breakthroughs in Deep Learning are prompting important concerns in the European Union to preserve speech data privacy. For instance, an attacker can record speech from users and impersonate them to get access to systems requiring voice identification. Hacking speaker profiles from users is also possible by means of existing technology to extract speaker, linguistic (e.g., dialect) and paralinguistic features (e.g., age) from the speech signal. In order to mitigate these weaknesses, in this paper, we propose a speaker de-identification system based on adversarial training and autoencoders in order to suppress speaker, gender, and accent information from speech. Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system while preserving the intelligibility of the anonymized spoken content.
[ { "created": "Mon, 9 Nov 2020 19:22:05 GMT", "version": "v1" } ]
2021-02-01
[ [ "Espinoza-Cuadros", "Fernando M.", "" ], [ "Perero-Codosero", "Juan M.", "" ], [ "Antón-Martín", "Javier", "" ], [ "Hernández-Gómez", "Luis A.", "" ] ]
The fast increase of web services and mobile apps, which collect personal data from users, increases the risk that their privacy may be severely compromised. In particular, the increasing variety of spoken language interfaces and voice assistants empowered by the vertiginous breakthroughs in Deep Learning are prompting important concerns in the European Union to preserve speech data privacy. For instance, an attacker can record speech from users and impersonate them to get access to systems requiring voice identification. Hacking speaker profiles from users is also possible by means of existing technology to extract speaker, linguistic (e.g., dialect) and paralinguistic features (e.g., age) from the speech signal. In order to mitigate these weaknesses, in this paper, we propose a speaker de-identification system based on adversarial training and autoencoders in order to suppress speaker, gender, and accent information from speech. Experimental results show that combining adversarial learning and autoencoders increases the equal error rate of a speaker verification system while preserving the intelligibility of the anonymized spoken content.
2308.15235
Nicos Isaak
Nicos Isaak
PronounFlow: A Hybrid Approach for Calibrating Pronouns in Sentences
13 pages, 4 figures, 3 tables
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flip through any book or listen to any song lyrics, and you will come across pronouns that, in certain cases, can hinder meaning comprehension, especially for machines. As the role of having cognitive machines becomes pervasive in our lives, numerous systems have been developed to resolve pronouns under various challenges. Commensurate with this, it is believed that having systems able to disambiguate pronouns in sentences will help towards the endowment of machines with commonsense and reasoning abilities like those found in humans. However, one problem these systems face with modern English is the lack of gender pronouns, where people try to alternate by using masculine, feminine, or plural to avoid the whole issue. Since humanity aims at building systems in the full-bodied sense we usually reserve for people, what happens when pronouns in written text, like plural or epicene ones, refer to unspecified entities whose gender is not necessarily known? Wouldn't that put extra barriers to existing coreference resolution systems? Towards answering those questions, through the implementation of a neural-symbolic system that utilizes the best of both worlds, we are employing PronounFlow, a system that reads any English sentence with pronouns and entities, identifies which of them are not tied to each other, and makes suggestions on which to use to avoid biases. Undertaken experiments show that PronounFlow not only alternates pronouns in sentences based on the collective human knowledge around us but also considerably helps coreference resolution systems with the pronoun disambiguation process.
[ { "created": "Tue, 29 Aug 2023 11:46:27 GMT", "version": "v1" } ]
2023-08-30
[ [ "Isaak", "Nicos", "" ] ]
Flip through any book or listen to any song lyrics, and you will come across pronouns that, in certain cases, can hinder meaning comprehension, especially for machines. As the role of having cognitive machines becomes pervasive in our lives, numerous systems have been developed to resolve pronouns under various challenges. Commensurate with this, it is believed that having systems able to disambiguate pronouns in sentences will help towards the endowment of machines with commonsense and reasoning abilities like those found in humans. However, one problem these systems face with modern English is the lack of gender pronouns, where people try to alternate by using masculine, feminine, or plural to avoid the whole issue. Since humanity aims at building systems in the full-bodied sense we usually reserve for people, what happens when pronouns in written text, like plural or epicene ones, refer to unspecified entities whose gender is not necessarily known? Wouldn't that put extra barriers to existing coreference resolution systems? Towards answering those questions, through the implementation of a neural-symbolic system that utilizes the best of both worlds, we are employing PronounFlow, a system that reads any English sentence with pronouns and entities, identifies which of them are not tied to each other, and makes suggestions on which to use to avoid biases. Undertaken experiments show that PronounFlow not only alternates pronouns in sentences based on the collective human knowledge around us but also considerably helps coreference resolution systems with the pronoun disambiguation process.
0906.1086
Jean-Marie Vanherpe
Jean-Luc Fouquet (LIFO), Jean-Marie Vanherpe (LIFO)
On Fulkerson conjecture
Accepted for publication in Discussiones Mathematicae Graph Theory; Discussiones Mathematicae Graph Theory (2010) xxx-yyy
Discussiones Mathematicae Graph Theory 31, 2 (2011) 253-272
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
If $G$ is a bridgeless cubic graph, Fulkerson conjectured that we can find 6 perfect matchings (a {\em Fulkerson covering}) with the property that every edge of $G$ is contained in exactly two of them. A consequence of the Fulkerson conjecture would be that every bridgeless cubic graph has 3 perfect matchings with empty intersection (this problem is known as the Fan Raspaud Conjecture). A {\em FR-triple} is a set of 3 such perfect matchings. We show here how to derive a Fulkerson covering from two FR-triples. Moreover, we give a simple proof that the Fulkerson conjecture holds true for some classes of well known snarks.
[ { "created": "Fri, 5 Jun 2009 10:54:55 GMT", "version": "v1" }, { "created": "Mon, 31 May 2010 09:12:31 GMT", "version": "v2" } ]
2011-04-01
[ [ "Fouquet", "Jean-Luc", "", "LIFO" ], [ "Vanherpe", "Jean-Marie", "", "LIFO" ] ]
If $G$ is a bridgeless cubic graph, Fulkerson conjectured that we can find 6 perfect matchings (a {\em Fulkerson covering}) with the property that every edge of $G$ is contained in exactly two of them. A consequence of the Fulkerson conjecture would be that every bridgeless cubic graph has 3 perfect matchings with empty intersection (this problem is known as the Fan Raspaud Conjecture). A {\em FR-triple} is a set of 3 such perfect matchings. We show here how to derive a Fulkerson covering from two FR-triples. Moreover, we give a simple proof that the Fulkerson conjecture holds true for some classes of well known snarks.
2406.05290
Sergiy Shelyag
Abhiram Anand Thiruthummal, Sergiy Shelyag, Eun-jin Kim
Extremization to Fine Tune Physics Informed Neural Networks for Solving Boundary Value Problems
Accepted for publication in CNSNS
null
null
null
cs.LG cs.CE cs.NA math.NA physics.comp-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose a novel method for fast and accurate training of physics-informed neural networks (PINNs) to find solutions to boundary value problems (BVPs) and initial boundary value problems (IBVPs). By combining the methods of training deep neural networks (DNNs) and Extreme Learning Machines (ELMs), we develop a model which has the expressivity of DNNs with the fine-tuning ability of ELMs. We showcase the superiority of our proposed method by solving several BVPs and IBVPs which include linear and non-linear ordinary differential equations (ODEs), partial differential equations (PDEs) and coupled PDEs. The examples we consider include a stiff coupled ODE system where traditional numerical methods fail, a 3+1D non-linear PDE, Kovasznay flow and Taylor-Green vortex solutions to incompressible Navier-Stokes equations and pure advection solution of the 1+1D compressible Euler equation. The Theory of Functional Connections (TFC) is used to exactly impose initial and boundary conditions (IBCs) of (I)BVPs on PINNs. We propose a modification to the TFC framework named Reduced TFC and show a significant improvement in the training and inference time of PINNs compared to IBCs imposed using TFC. Furthermore, Reduced TFC is shown to be able to generalize to more complex boundary geometries which is not possible with TFC. We also introduce a method of applying boundary conditions at infinity for BVPs and numerically solve the pure advection in the 1+1D Euler equations using these boundary conditions.
[ { "created": "Fri, 7 Jun 2024 23:25:13 GMT", "version": "v1" } ]
2024-06-11
[ [ "Thiruthummal", "Abhiram Anand", "" ], [ "Shelyag", "Sergiy", "" ], [ "Kim", "Eun-jin", "" ] ]
We propose a novel method for fast and accurate training of physics-informed neural networks (PINNs) to find solutions to boundary value problems (BVPs) and initial boundary value problems (IBVPs). By combining the methods of training deep neural networks (DNNs) and Extreme Learning Machines (ELMs), we develop a model which has the expressivity of DNNs with the fine-tuning ability of ELMs. We showcase the superiority of our proposed method by solving several BVPs and IBVPs, which include linear and non-linear ordinary differential equations (ODEs), partial differential equations (PDEs) and coupled PDEs. The examples we consider include a stiff coupled ODE system where traditional numerical methods fail, a 3+1D non-linear PDE, Kovasznay flow and Taylor-Green vortex solutions to the incompressible Navier-Stokes equations, and the pure advection solution of the 1+1D compressible Euler equation. The Theory of Functional Connections (TFC) is used to exactly impose initial and boundary conditions (IBCs) of (I)BVPs on PINNs. We propose a modification to the TFC framework named Reduced TFC and show a significant improvement in the training and inference time of PINNs compared to IBCs imposed using TFC. Furthermore, Reduced TFC is shown to be able to generalize to more complex boundary geometries, which is not possible with TFC. We also introduce a method of applying boundary conditions at infinity for BVPs and numerically solve the pure advection in the 1+1D Euler equations using these boundary conditions.
1901.08274
Zaiqiang Wu
Zaiqiang Wu, Wei Jiang, Hao Luo, Lin Cheng
A Novel Self-Intersection Penalty Term for Statistical Body Shape Models and Its Applications in 3D Pose Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical body shape models are widely used in 3D pose estimation due to their low-dimensional parameter representation. However, it is difficult to avoid self-intersection between body parts accurately. Motivated by this fact, we propose a novel self-intersection penalty term for statistical body shape models applied in 3D pose estimation. To avoid the trouble of computing self-intersection for complex surfaces like body meshes, the gradient of our proposed self-intersection penalty term is manually derived from the perspective of geometry. First, the self-intersection penalty term is defined as the volume of the self-intersection region. To calculate the partial derivatives with respect to the coordinates of the vertices, we employ detection rays to divide the vertices of statistical body shape models into different groups depending on whether each vertex is in the region of self-intersection. Second, the partial derivatives can be easily derived from the normal vectors of the neighboring triangles of the vertices. Finally, this penalty term can be applied in gradient-based optimization algorithms to remove the self-intersection of triangular meshes without using any approximation. Qualitative and quantitative evaluations were conducted to demonstrate the effectiveness and generality of our proposed method compared with previous approaches. The experimental results show that our proposed penalty term can avoid self-intersection, exclude unreasonable predictions, and thereby indirectly improve the accuracy of 3D pose estimation. Furthermore, the proposed method can be employed universally in triangular-mesh-based 3D reconstruction.
[ { "created": "Thu, 24 Jan 2019 08:19:37 GMT", "version": "v1" } ]
2019-01-25
[ [ "Wu", "Zaiqiang", "" ], [ "Jiang", "Wei", "" ], [ "Luo", "Hao", "" ], [ "Cheng", "Lin", "" ] ]
Statistical body shape models are widely used in 3D pose estimation due to their low-dimensional parameter representation. However, it is difficult to avoid self-intersection between body parts accurately. Motivated by this fact, we propose a novel self-intersection penalty term for statistical body shape models applied in 3D pose estimation. To avoid the trouble of computing self-intersection for complex surfaces like body meshes, the gradient of our proposed self-intersection penalty term is manually derived from the perspective of geometry. First, the self-intersection penalty term is defined as the volume of the self-intersection region. To calculate the partial derivatives with respect to the coordinates of the vertices, we employ detection rays to divide the vertices of statistical body shape models into different groups depending on whether each vertex is in the region of self-intersection. Second, the partial derivatives can be easily derived from the normal vectors of the neighboring triangles of the vertices. Finally, this penalty term can be applied in gradient-based optimization algorithms to remove the self-intersection of triangular meshes without using any approximation. Qualitative and quantitative evaluations were conducted to demonstrate the effectiveness and generality of our proposed method compared with previous approaches. The experimental results show that our proposed penalty term can avoid self-intersection, exclude unreasonable predictions, and thereby indirectly improve the accuracy of 3D pose estimation. Furthermore, the proposed method can be employed universally in triangular-mesh-based 3D reconstruction.
2304.09406
Minh Hua
Minh Hua, Rita Raley
How to Do Things with Deep Learning Code
Accepted for publication in Digital Humanities Quarterly
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The premise of this article is that a basic understanding of the composition and functioning of large language models is critically urgent. To that end, we extract a representational map of OpenAI's GPT-2 with what we articulate as two classes of deep learning code, that which pertains to the model and that which underwrites applications built around the model. We then verify this map through case studies of two popular GPT-2 applications: the text adventure game, AI Dungeon, and the language art project, This Word Does Not Exist. Such an exercise allows us to test the potential of Critical Code Studies when the object of study is deep learning code and to demonstrate the validity of code as an analytical focus for researchers in the subfields of Critical Artificial Intelligence and Critical Machine Learning Studies. More broadly, however, our work draws attention to the means by which ordinary users might interact with, and even direct, the behavior of deep learning systems, and by extension works toward demystifying some of the auratic mystery of "AI." What is at stake is the possibility of achieving an informed sociotechnical consensus about the responsible applications of large language models, as well as a more expansive sense of their creative capabilities. Indeed, understanding how and where engagement occurs allows all of us to become more active participants in the development of machine learning systems.
[ { "created": "Wed, 19 Apr 2023 03:46:12 GMT", "version": "v1" } ]
2023-04-20
[ [ "Hua", "Minh", "" ], [ "Raley", "Rita", "" ] ]
The premise of this article is that a basic understanding of the composition and functioning of large language models is critically urgent. To that end, we extract a representational map of OpenAI's GPT-2 with what we articulate as two classes of deep learning code, that which pertains to the model and that which underwrites applications built around the model. We then verify this map through case studies of two popular GPT-2 applications: the text adventure game, AI Dungeon, and the language art project, This Word Does Not Exist. Such an exercise allows us to test the potential of Critical Code Studies when the object of study is deep learning code and to demonstrate the validity of code as an analytical focus for researchers in the subfields of Critical Artificial Intelligence and Critical Machine Learning Studies. More broadly, however, our work draws attention to the means by which ordinary users might interact with, and even direct, the behavior of deep learning systems, and by extension works toward demystifying some of the auratic mystery of "AI." What is at stake is the possibility of achieving an informed sociotechnical consensus about the responsible applications of large language models, as well as a more expansive sense of their creative capabilities. Indeed, understanding how and where engagement occurs allows all of us to become more active participants in the development of machine learning systems.
2204.06127
Mingshuo Nie
Mingshuo Nie, Dongming Chen, Dongqi Wang
Reinforcement learning on graphs: A survey
Accepted by IEEE Transactions on Emerging Topics in Computational Intelligence
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph mining tasks arise from many different application domains, ranging from social networks and transportation to E-commerce, and have been receiving great attention from the theoretical and algorithmic design communities in recent years; there has also been some pioneering work employing well-studied Reinforcement Learning (RL) techniques to address graph data mining tasks. However, these graph mining methods and RL models are dispersed in different research areas, which makes it hard to compare them. In this survey, we provide a comprehensive overview of RL and graph mining methods and generalize these methods to Graph Reinforcement Learning (GRL) as a unified formulation. We further discuss the applications of GRL methods across various domains and summarize the method descriptions, open-source codes, and benchmark datasets of GRL methods. Furthermore, we propose important directions and challenges to be solved in the future. As far as we know, this is the latest comprehensive survey of GRL; this work provides a global view and a learning resource for scholars. In addition, we create an online open-source repository for both interested scholars who want to enter this rapidly developing domain and experts who would like to compare GRL methods.
[ { "created": "Wed, 13 Apr 2022 01:25:58 GMT", "version": "v1" }, { "created": "Wed, 27 Apr 2022 12:50:20 GMT", "version": "v2" }, { "created": "Fri, 11 Nov 2022 11:19:28 GMT", "version": "v3" }, { "created": "Sun, 15 Jan 2023 01:44:48 GMT", "version": "v4" } ]
2023-01-18
[ [ "Nie", "Mingshuo", "" ], [ "Chen", "Dongming", "" ], [ "Wang", "Dongqi", "" ] ]
Graph mining tasks arise from many different application domains, ranging from social networks and transportation to E-commerce, and have been receiving great attention from the theoretical and algorithmic design communities in recent years; there has also been some pioneering work employing well-studied Reinforcement Learning (RL) techniques to address graph data mining tasks. However, these graph mining methods and RL models are dispersed in different research areas, which makes it hard to compare them. In this survey, we provide a comprehensive overview of RL and graph mining methods and generalize these methods to Graph Reinforcement Learning (GRL) as a unified formulation. We further discuss the applications of GRL methods across various domains and summarize the method descriptions, open-source codes, and benchmark datasets of GRL methods. Furthermore, we propose important directions and challenges to be solved in the future. As far as we know, this is the latest comprehensive survey of GRL; this work provides a global view and a learning resource for scholars. In addition, we create an online open-source repository for both interested scholars who want to enter this rapidly developing domain and experts who would like to compare GRL methods.
2301.01808
Adar Kahana
Adar Kahana and Oren Elisha
MessageNet: Message Classification using Natural Language Processing and Meta-data
null
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper we propose a new Deep Learning (DL) approach for message classification. Our method is based on state-of-the-art Natural Language Processing (NLP) building blocks, combined with a novel technique for infusing the meta-data input that is typically available in messages, such as the sender information, timestamps, attached images, audio, affiliations, and more. As we demonstrate throughout the paper, going beyond the mere text by leveraging all available channels in the message can yield an improved representation and higher classification accuracy. To achieve message representation, each type of input is processed in a dedicated block in the neural network architecture that is suitable for the data type. Such an implementation enables training all blocks together simultaneously, and forming cross-channel features in the network. We show in the Experiments Section that in some cases a message's meta-data holds additional information that cannot be extracted from the text alone, and that using this information achieves better performance. Furthermore, we demonstrate that our multi-modality block approach outperforms other approaches for injecting the meta-data into the text classifier.
[ { "created": "Wed, 4 Jan 2023 20:11:00 GMT", "version": "v1" } ]
2023-01-06
[ [ "Kahana", "Adar", "" ], [ "Elisha", "Oren", "" ] ]
In this paper we propose a new Deep Learning (DL) approach for message classification. Our method is based on state-of-the-art Natural Language Processing (NLP) building blocks, combined with a novel technique for infusing the meta-data input that is typically available in messages, such as the sender information, timestamps, attached images, audio, affiliations, and more. As we demonstrate throughout the paper, going beyond the mere text by leveraging all available channels in the message can yield an improved representation and higher classification accuracy. To achieve message representation, each type of input is processed in a dedicated block in the neural network architecture that is suitable for the data type. Such an implementation enables training all blocks together simultaneously, and forming cross-channel features in the network. We show in the Experiments Section that in some cases a message's meta-data holds additional information that cannot be extracted from the text alone, and that using this information achieves better performance. Furthermore, we demonstrate that our multi-modality block approach outperforms other approaches for injecting the meta-data into the text classifier.
1611.04021
Tseng-Hung Chen
Kuo-Hao Zeng, Tseng-Hung Chen, Ching-Yao Chuang, Yuan-Hong Liao, Juan Carlos Niebles, Min Sun
Leveraging Video Descriptions to Learn Video Question Answering
7 pages, 5 figures. Accepted to AAAI 2017. Camera-ready version
null
null
null
cs.CV cs.AI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a scalable approach to learn video-based question answering (QA): answering a "free-form natural language question" about a video's content. Our approach automatically harvests a large number of videos and descriptions freely available online. Then, a large number of candidate QA pairs are automatically generated from descriptions rather than manually annotated. Next, we use these candidate QA pairs to train a number of video-based QA methods extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), and SS (Venugopalan et al. 2015). In order to handle non-perfect candidate QA pairs, we propose a self-paced learning procedure to iteratively identify them and mitigate their effects in training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective, and the extended SS model outperforms various baselines.
[ { "created": "Sat, 12 Nov 2016 17:15:57 GMT", "version": "v1" }, { "created": "Mon, 19 Dec 2016 16:07:33 GMT", "version": "v2" } ]
2016-12-20
[ [ "Zeng", "Kuo-Hao", "" ], [ "Chen", "Tseng-Hung", "" ], [ "Chuang", "Ching-Yao", "" ], [ "Liao", "Yuan-Hong", "" ], [ "Niebles", "Juan Carlos", "" ], [ "Sun", "Min", "" ] ]
We propose a scalable approach to learn video-based question answering (QA): answering a "free-form natural language question" about a video's content. Our approach automatically harvests a large number of videos and descriptions freely available online. Then, a large number of candidate QA pairs are automatically generated from descriptions rather than manually annotated. Next, we use these candidate QA pairs to train a number of video-based QA methods extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), and SS (Venugopalan et al. 2015). In order to handle non-perfect candidate QA pairs, we propose a self-paced learning procedure to iteratively identify them and mitigate their effects in training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective, and the extended SS model outperforms various baselines.
2009.06184
Yifan Wang
Yifan Wang, Guoli Yan, Haikuan Zhu, Sagar Buch, Ying Wang, Ewart Mark Haacke, Jing Hua, and Zichun Zhong
VC-Net: Deep Volume-Composition Networks for Segmentation and Visualization of Highly Sparse and Noisy Image Data
15 pages, 10 figures, proceeding to IEEE Transactions on Visualization and Computer Graphics (TVCG) (IEEE SciVis 2020), October, 2020
null
null
null
cs.GR cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The motivation of our work is to present a new visualization-guided computing paradigm to combine direct 3D volume processing and volume-rendered clues for effective 3D exploration, such as extracting and visualizing microstructures in vivo. However, it is still challenging to extract and visualize high-fidelity 3D vessel structure due to its high sparseness, noisiness, and complex topology variations. In this paper, we present an end-to-end deep learning method, VC-Net, for robust extraction of 3D microvasculature through embedding the image composition, generated by maximum intensity projection (MIP), into 3D volume image learning to enhance the performance. The core novelty is to automatically leverage the volume visualization technique (MIP) to enhance the 3D data exploration at the deep learning level. The MIP embedding features can enhance the local vessel signal and are adaptive to the geometric variability and scalability of vessels, which is crucial in microvascular tracking. A multi-stream convolutional neural network is proposed to learn the 3D volume and 2D MIP features respectively and then explore their inter-dependencies in a joint volume-composition embedding space by unprojecting the MIP features into the 3D volume embedding space. The proposed framework can better capture small/micro vessels and improve vessel connectivity. To our knowledge, this is the first deep learning framework to construct a joint convolutional embedding space, where the computed vessel probabilities from volume-rendering-based 2D projection and 3D volume can be explored and integrated synergistically. Experimental results are compared with the traditional 3D vessel segmentation methods and the deep learning state-of-the-art on public and real patient (micro-)cerebrovascular image datasets. Our method demonstrates its potential for powerful MR arteriogram and venogram diagnosis of vascular diseases.
[ { "created": "Mon, 14 Sep 2020 04:15:02 GMT", "version": "v1" } ]
2020-09-15
[ [ "Wang", "Yifan", "" ], [ "Yan", "Guoli", "" ], [ "Zhu", "Haikuan", "" ], [ "Buch", "Sagar", "" ], [ "Wang", "Ying", "" ], [ "Haacke", "Ewart Mark", "" ], [ "Hua", "Jing", "" ], [ "Zhong", "Zichun", "" ] ]
The motivation of our work is to present a new visualization-guided computing paradigm to combine direct 3D volume processing and volume-rendered clues for effective 3D exploration, such as extracting and visualizing microstructures in vivo. However, it is still challenging to extract and visualize high-fidelity 3D vessel structure due to its high sparseness, noisiness, and complex topology variations. In this paper, we present an end-to-end deep learning method, VC-Net, for robust extraction of 3D microvasculature through embedding the image composition, generated by maximum intensity projection (MIP), into 3D volume image learning to enhance the performance. The core novelty is to automatically leverage the volume visualization technique (MIP) to enhance the 3D data exploration at the deep learning level. The MIP embedding features can enhance the local vessel signal and are adaptive to the geometric variability and scalability of vessels, which is crucial in microvascular tracking. A multi-stream convolutional neural network is proposed to learn the 3D volume and 2D MIP features respectively and then explore their inter-dependencies in a joint volume-composition embedding space by unprojecting the MIP features into the 3D volume embedding space. The proposed framework can better capture small/micro vessels and improve vessel connectivity. To our knowledge, this is the first deep learning framework to construct a joint convolutional embedding space, where the computed vessel probabilities from volume-rendering-based 2D projection and 3D volume can be explored and integrated synergistically. Experimental results are compared with the traditional 3D vessel segmentation methods and the deep learning state-of-the-art on public and real patient (micro-)cerebrovascular image datasets. Our method demonstrates its potential for powerful MR arteriogram and venogram diagnosis of vascular diseases.
1808.00435
Sheng Chen
Sheng Chen, Jia Guo, Yang Liu, Xiang Gao, Zhen Han
Global Norm-Aware Pooling for Pose-Robust Face Recognition at Low False Positive Rate
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel Global Norm-Aware Pooling (GNAP) block, which reweights local features in a convolutional neural network (CNN) adaptively according to their L2 norms and outputs a global feature vector with a global average pooling layer. Our GNAP block is designed to give dynamic weights to local features in different spatial positions without losing spatial symmetry. We use a GNAP block in a face feature embedding CNN to produce discriminative face feature vectors for pose-robust face recognition. The GNAP block incurs very little computational cost, but it is very powerful for frontal-profile face recognition. Under the CFP frontal-profile protocol, the GNAP block can not only reduce EER dramatically but also boost TPR@FPR=0.1% (TPR i.e. True Positive Rate, FPR i.e. False Positive Rate) substantially. Our experiments show that the GNAP block greatly promotes pose-robust face recognition over the base model, especially at low false positive rates.
[ { "created": "Wed, 1 Aug 2018 17:32:31 GMT", "version": "v1" } ]
2018-08-02
[ [ "Chen", "Sheng", "" ], [ "Guo", "Jia", "" ], [ "Liu", "Yang", "" ], [ "Gao", "Xiang", "" ], [ "Han", "Zhen", "" ] ]
In this paper, we propose a novel Global Norm-Aware Pooling (GNAP) block, which reweights local features in a convolutional neural network (CNN) adaptively according to their L2 norms and outputs a global feature vector with a global average pooling layer. Our GNAP block is designed to give dynamic weights to local features in different spatial positions without losing spatial symmetry. We use a GNAP block in a face feature embedding CNN to produce discriminative face feature vectors for pose-robust face recognition. The GNAP block incurs very little computational cost, but it is very powerful for frontal-profile face recognition. Under the CFP frontal-profile protocol, the GNAP block can not only reduce EER dramatically but also boost TPR@FPR=0.1% (TPR i.e. True Positive Rate, FPR i.e. False Positive Rate) substantially. Our experiments show that the GNAP block greatly promotes pose-robust face recognition over the base model, especially at low false positive rates.
1902.09968
Runsheng Zhang
Runsheng Zhang, Yaping Huang, Mengyang Pu, Jian Zhang, Qingji Guan, Qi Zou, Haibin Ling
Object Discovery From a Single Unlabeled Image by Mining Frequent Itemset With Multi-scale Features
null
null
10.1109/TIP.2020.3015543
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of our work is to discover dominant objects in a very general setting where only a single unlabeled image is given. This is far more challenging than typical co-localization or weakly-supervised localization tasks. To tackle this problem, we propose a simple but effective pattern mining-based method, called Object Location Mining (OLM), which exploits the advantages of data mining and the feature representation of pre-trained convolutional neural networks (CNNs). Specifically, we first convert the feature maps from a pre-trained CNN model into a set of transactions, and then discover frequent patterns from the transaction database through pattern mining techniques. We observe that those discovered patterns, i.e., co-occurrence highlighted regions, typically hold appearance and spatial consistency. Motivated by this observation, we can easily discover and localize possible objects by merging relevant meaningful patterns. Extensive experiments on a variety of benchmarks demonstrate that OLM achieves competitive localization performance compared with the state-of-the-art methods. We also evaluate our approach against unsupervised saliency detection methods and achieve competitive results on seven benchmark datasets. Moreover, we conduct experiments on fine-grained classification to show that our proposed method can locate the entire object and its parts accurately, which can significantly improve the classification results.
[ { "created": "Tue, 26 Feb 2019 14:37:01 GMT", "version": "v1" }, { "created": "Tue, 24 Sep 2019 11:19:00 GMT", "version": "v2" }, { "created": "Sat, 8 Aug 2020 05:05:12 GMT", "version": "v3" } ]
2023-07-19
[ [ "Zhang", "Runsheng", "" ], [ "Huang", "Yaping", "" ], [ "Pu", "Mengyang", "" ], [ "Zhang", "Jian", "" ], [ "Guan", "Qingji", "" ], [ "Zou", "Qi", "" ], [ "Ling", "Haibin", "" ] ]
The goal of our work is to discover dominant objects in a very general setting where only a single unlabeled image is given. This is far more challenging than typical co-localization or weakly-supervised localization tasks. To tackle this problem, we propose a simple but effective pattern mining-based method, called Object Location Mining (OLM), which exploits the advantages of data mining and the feature representation of pre-trained convolutional neural networks (CNNs). Specifically, we first convert the feature maps from a pre-trained CNN model into a set of transactions, and then discover frequent patterns from the transaction database through pattern mining techniques. We observe that those discovered patterns, i.e., co-occurrence highlighted regions, typically hold appearance and spatial consistency. Motivated by this observation, we can easily discover and localize possible objects by merging relevant meaningful patterns. Extensive experiments on a variety of benchmarks demonstrate that OLM achieves competitive localization performance compared with the state-of-the-art methods. We also evaluate our approach against unsupervised saliency detection methods and achieve competitive results on seven benchmark datasets. Moreover, we conduct experiments on fine-grained classification to show that our proposed method can locate the entire object and its parts accurately, which can significantly improve the classification results.
2007.00413
Yamin Li
Yamin Li, Kai Tang, Dong He, Xiangyu Wang
Multi-Axis Support-Free Printing of Freeform Parts with Lattice Infill Structures
arXiv admin note: text overlap with arXiv:2003.05938
null
null
null
cs.GR cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In additive manufacturing, infill structures are commonly used to reduce the weight and cost of a solid part. Currently, most infill structure generation methods are based on the conventional 2.5-axis printing configuration, which, although able to satisfy the self-supporting condition on the infills, suffer from the well-known staircase effect on the finished surface and the need of extensive support for overhang features. In this paper, based on the emerging continuous multi-axis printing configuration, we present a new lattice infill structure generation algorithm, which is able to achieve both the self-supporting condition for the infills and the support-free requirement at the boundary surface of the part. The algorithm critically relies on the use of three mutually orthogonal geodesic distance fields that are embedded in the tetrahedral mesh of the solid model. The intersection between the iso-geodesic distance surfaces of these three geodesic distance fields naturally forms the desired lattice of infill structure, while the density of the infills can be conveniently controlled by adjusting the iso-values. The lattice infill pattern in each curved slicing layer is trimmed to conform to an Eulerian graph so as to generate a continuous printing path, which can effectively reduce the nozzle retractions during the printing process. In addition, to cater to the collision-free requirement and to improve the printing efficiency, we also propose a printing sequence optimization algorithm for determining a collision-free order of printing of the connected lattice infills, which seeks to reduce the air-move length of the nozzle. Ample experiments in both computer simulation and physical printing are performed, and the results give a preliminary confirmation of the advantages of our methodology.
[ { "created": "Tue, 30 Jun 2020 06:08:00 GMT", "version": "v1" } ]
2020-07-02
[ [ "Li", "Yamin", "" ], [ "Tang", "Kai", "" ], [ "He", "Dong", "" ], [ "Wang", "Xiangyu", "" ] ]
In additive manufacturing, infill structures are commonly used to reduce the weight and cost of a solid part. Currently, most infill structure generation methods are based on the conventional 2.5-axis printing configuration, which, although able to satisfy the self-supporting condition on the infills, suffer from the well-known staircase effect on the finished surface and the need of extensive support for overhang features. In this paper, based on the emerging continuous multi-axis printing configuration, we present a new lattice infill structure generation algorithm, which is able to achieve both the self-supporting condition for the infills and the support-free requirement at the boundary surface of the part. The algorithm critically relies on the use of three mutually orthogonal geodesic distance fields that are embedded in the tetrahedral mesh of the solid model. The intersection between the iso-geodesic distance surfaces of these three geodesic distance fields naturally forms the desired lattice of infill structure, while the density of the infills can be conveniently controlled by adjusting the iso-values. The lattice infill pattern in each curved slicing layer is trimmed to conform to an Eulerian graph so as to generate a continuous printing path, which can effectively reduce the nozzle retractions during the printing process. In addition, to cater to the collision-free requirement and to improve the printing efficiency, we also propose a printing sequence optimization algorithm for determining a collision-free order of printing of the connected lattice infills, which seeks to reduce the air-move length of the nozzle. Ample experiments in both computer simulation and physical printing are performed, and the results give a preliminary confirmation of the advantages of our methodology.
2005.10039
Hinrikus Wolf
Tobias Schumacher, Hinrikus Wolf, Martin Ritzert, Florian Lemmerich, Jan Bachmann, Florian Frantzen, Max Klabunde, Martin Grohe, Markus Strohmaier
The Effects of Randomness on the Stability of Node Embeddings
null
null
null
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We systematically evaluate the (in-)stability of state-of-the-art node embedding algorithms due to randomness, i.e., the random variation of their outcomes given identical algorithms and graphs. We apply five node embedding algorithms---HOPE, LINE, node2vec, SDNE, and GraphSAGE---to synthetic and empirical graphs and assess their stability under randomness with respect to (i) the geometry of embedding spaces as well as (ii) their performance in downstream tasks. We find significant instabilities in the geometry of embedding spaces independent of the centrality of a node. In the evaluation of downstream tasks, we find that the accuracy of node classification seems to be unaffected by random seeding while the actual classification of nodes can vary significantly. This suggests that instability effects need to be taken into account when working with node embeddings. Our work is relevant for researchers and engineers interested in the effectiveness, reliability, and reproducibility of node embedding approaches.
[ { "created": "Wed, 20 May 2020 13:36:09 GMT", "version": "v1" } ]
2020-05-21
[ [ "Schumacher", "Tobias", "" ], [ "Wolf", "Hinrikus", "" ], [ "Ritzert", "Martin", "" ], [ "Lemmerich", "Florian", "" ], [ "Bachmann", "Jan", "" ], [ "Frantzen", "Florian", "" ], [ "Klabunde", "Max", "" ], [ "Grohe", "Martin", "" ], [ "Strohmaier", "Markus", "" ] ]
We systematically evaluate the (in-)stability of state-of-the-art node embedding algorithms due to randomness, i.e., the random variation of their outcomes given identical algorithms and graphs. We apply five node embedding algorithms---HOPE, LINE, node2vec, SDNE, and GraphSAGE---to synthetic and empirical graphs and assess their stability under randomness with respect to (i) the geometry of embedding spaces as well as (ii) their performance in downstream tasks. We find significant instabilities in the geometry of embedding spaces independent of the centrality of a node. In the evaluation of downstream tasks, we find that the accuracy of node classification seems to be unaffected by random seeding while the actual classification of nodes can vary significantly. This suggests that instability effects need to be taken into account when working with node embeddings. Our work is relevant for researchers and engineers interested in the effectiveness, reliability, and reproducibility of node embedding approaches.
2105.11879
Marcin Namysl
Marcin Namysl, Alexander M. Esser, Sven Behnke, Joachim K\"ohler
Flexible Table Recognition and Semantic Interpretation System
Accepted for publication in Proceedings of the 17th International Conference on Computer Vision Theory and Applications (VISAPP 2022)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Table extraction is an important but still unsolved problem. In this paper, we introduce a flexible and modular table extraction system. We develop two rule-based algorithms that perform the complete table recognition process, including table detection and segmentation, and support the most frequent table formats. Moreover, to incorporate the extraction of semantic information, we develop a graph-based table interpretation method. We conduct extensive experiments on the challenging table recognition benchmarks ICDAR 2013 and ICDAR 2019, achieving results competitive with state-of-the-art approaches. Our complete information extraction system exhibited a high F1 score of 0.7380. To support future research on information extraction from documents, we make the resources (ground-truth annotations, evaluation scripts, algorithm parameters) from our table interpretation experiment publicly available.
[ { "created": "Tue, 25 May 2021 12:31:02 GMT", "version": "v1" }, { "created": "Thu, 2 Dec 2021 17:33:35 GMT", "version": "v2" } ]
2021-12-03
[ [ "Namysl", "Marcin", "" ], [ "Esser", "Alexander M.", "" ], [ "Behnke", "Sven", "" ], [ "Köhler", "Joachim", "" ] ]
Table extraction is an important but still unsolved problem. In this paper, we introduce a flexible and modular table extraction system. We develop two rule-based algorithms that perform the complete table recognition process, including table detection and segmentation, and support the most frequent table formats. Moreover, to incorporate the extraction of semantic information, we develop a graph-based table interpretation method. We conduct extensive experiments on the challenging table recognition benchmarks ICDAR 2013 and ICDAR 2019, achieving results competitive with state-of-the-art approaches. Our complete information extraction system exhibited a high F1 score of 0.7380. To support future research on information extraction from documents, we make the resources (ground-truth annotations, evaluation scripts, algorithm parameters) from our table interpretation experiment publicly available.
2008.08791
Monica Perusquia-Hernandez
Monica Perusquia-Hernandez, Felix Dollack, Chun Kwang Tan, Shushi Namba, Saho Ayabe-Kanamura, Kenji Suzuki
Facial movement synergies and Action Unit detection from distal wearable Electromyography and Computer Vision
11 pages, 11 figures, 2 tables
null
null
null
cs.HC cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distal facial Electromyography (EMG) can be used to detect smiles and frowns with reasonable accuracy. It capitalizes on volume conduction to detect relevant muscle activity, even when the electrodes are not placed directly on the source muscle. The main advantage of this method is to prevent occlusion and obstruction of facial expression production, whilst allowing EMG measurements. However, measuring EMG distally entails that the exact source of the facial movement is unknown. We propose a novel method to estimate specific Facial Action Units (AUs) from distal facial EMG and Computer Vision (CV). This method is based on Independent Component Analysis (ICA), Non-Negative Matrix Factorization (NNMF), and sorting of the resulting components to determine which is the most likely to correspond to each CV-labeled action unit (AU). Performance on the detection of AU06 (Orbicularis Oculi) and AU12 (Zygomaticus Major) was estimated by calculating the agreement with Human Coders. The results of our proposed algorithm showed an accuracy of 81% and a Cohen's Kappa of 0.49 for AU6; and an accuracy of 82% and a Cohen's Kappa of 0.53 for AU12. This demonstrates the potential of distal EMG to detect individual facial movements. Using this multimodal method, several AU synergies were identified. We quantified the co-occurrence and timing of AU6 and AU12 in posed and spontaneous smiles using the human-coded labels, and for comparison, using the continuous CV-labels. The co-occurrence analysis was also performed on the EMG-based labels to uncover the relationship between muscle synergies and the kinematics of visible facial movement.
[ { "created": "Thu, 20 Aug 2020 06:09:03 GMT", "version": "v1" } ]
2020-08-21
[ [ "Perusquia-Hernandez", "Monica", "" ], [ "Dollack", "Felix", "" ], [ "Tan", "Chun Kwang", "" ], [ "Namba", "Shushi", "" ], [ "Ayabe-Kanamura", "Saho", "" ], [ "Suzuki", "Kenji", "" ] ]
Distal facial Electromyography (EMG) can be used to detect smiles and frowns with reasonable accuracy. It capitalizes on volume conduction to detect relevant muscle activity, even when the electrodes are not placed directly on the source muscle. The main advantage of this method is to prevent occlusion and obstruction of facial expression production, whilst allowing EMG measurements. However, measuring EMG distally entails that the exact source of the facial movement is unknown. We propose a novel method to estimate specific Facial Action Units (AUs) from distal facial EMG and Computer Vision (CV). This method is based on Independent Component Analysis (ICA), Non-Negative Matrix Factorization (NNMF), and sorting of the resulting components to determine which is the most likely to correspond to each CV-labeled action unit (AU). Performance on the detection of AU06 (Orbicularis Oculi) and AU12 (Zygomaticus Major) was estimated by calculating the agreement with Human Coders. The results of our proposed algorithm showed an accuracy of 81% and a Cohen's Kappa of 0.49 for AU6; and an accuracy of 82% and a Cohen's Kappa of 0.53 for AU12. This demonstrates the potential of distal EMG to detect individual facial movements. Using this multimodal method, several AU synergies were identified. We quantified the co-occurrence and timing of AU6 and AU12 in posed and spontaneous smiles using the human-coded labels, and for comparison, using the continuous CV-labels. The co-occurrence analysis was also performed on the EMG-based labels to uncover the relationship between muscle synergies and the kinematics of visible facial movement.
2208.05343
Pino Caballero-Gil
Pino Caballero-Gil, Francisco Mart\'in-Fern\'andez, C\'andido Caballero-Gil
Using query frequencies in tree-based revocation for certificateless authentication in VANETs
null
The 9th International Conference for Internet Technology and Secured Transactions (ICITST-2014), pp. 268-273, 2014
10.1109/ICITST.2014.7038819
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Revocation of dishonest users is not an easy problem. This paper proposes a new way to manage revocation of pseudonyms in vehicular ad-hoc networks when using identity-based authentication to increase efficiency and security through certificateless authentication. In order to improve the performance of revocation lists, this paper proposes the use of a data structure based on authenticated dynamic hash k-ary trees and the frequency with which revoked pseudonyms are consulted. The use of the knowledge about the frequency of consultation of revoked pseudonyms allows an easier access to the most popular revoked pseudonyms to the detriment of revoked pseudonyms that are the least consulted. Accordingly, the proposal is especially useful in urban environments where there are vehicles that spend more time on the road than others, such as public service vehicles.
[ { "created": "Sat, 6 Aug 2022 18:45:34 GMT", "version": "v1" } ]
2022-08-11
[ [ "Caballero-Gil", "Pino", "" ], [ "Martín-Fernández", "Francisco", "" ], [ "Caballero-Gil", "Cándido", "" ] ]
Revocation of dishonest users is not an easy problem. This paper proposes a new way to manage revocation of pseudonyms in vehicular ad-hoc networks when using identity-based authentication to increase efficiency and security through certificateless authentication. In order to improve the performance of revocation lists, this paper proposes the use of a data structure based on authenticated dynamic hash k-ary trees and the frequency with which revoked pseudonyms are consulted. The use of the knowledge about the frequency of consultation of revoked pseudonyms allows an easier access to the most popular revoked pseudonyms to the detriment of revoked pseudonyms that are the least consulted. Accordingly, the proposal is especially useful in urban environments where there are vehicles that spend more time on the road than others, such as public service vehicles.
1902.06422
Hirofumi Tsuda
Hirofumi Tsuda
Optimal Sequence and Performance for Desired User in Asynchronous CDMA System
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider asynchronous CDMA systems in no-fading environments with a particular focus on a certain user. This certain user is called a desired user in this paper. In such a situation, an optimal sequence, maximum Signal-to-Interference plus Noise Ratio (SINR) and the maximum capacity for a desired user are derived with other spreading sequences being given and fixed. In addition, the maximum SINR and the optimal sequence for a desired user are written in terms of the minimum eigenvalue and the corresponding eigenvector of a matrix, respectively. Since it is not straightforward to obtain an explicit form of the maximum SINR, we evaluate SINR and obtain the lower and upper bounds of the maximum SINR. From these bounds, the maximum SINR may get larger as the quantities written in terms of quadratic forms of other spreading sequences decrease. Further, we propose a method to obtain spreading sequences for all the users which achieve large SINRs. The performance of our proposed method is numerically verified.
[ { "created": "Mon, 18 Feb 2019 06:48:50 GMT", "version": "v1" }, { "created": "Sun, 24 Mar 2019 06:01:47 GMT", "version": "v2" }, { "created": "Wed, 24 Apr 2019 13:10:41 GMT", "version": "v3" }, { "created": "Thu, 6 Jun 2019 13:57:17 GMT", "version": "v4" }, { "created": "Sun, 9 Jun 2019 09:19:26 GMT", "version": "v5" } ]
2019-06-11
[ [ "Tsuda", "Hirofumi", "" ] ]
We consider asynchronous CDMA systems in no-fading environments with a particular focus on a certain user. This certain user is called a desired user in this paper. In such a situation, an optimal sequence, maximum Signal-to-Interference plus Noise Ratio (SINR) and the maximum capacity for a desired user are derived with other spreading sequences being given and fixed. In addition, the maximum SINR and the optimal sequence for a desired user are written in terms of the minimum eigenvalue and the corresponding eigenvector of a matrix, respectively. Since it is not straightforward to obtain an explicit form of the maximum SINR, we evaluate SINR and obtain the lower and upper bounds of the maximum SINR. From these bounds, the maximum SINR may get larger as the quantities written in terms of quadratic forms of other spreading sequences decrease. Further, we propose a method to obtain spreading sequences for all the users which achieve large SINRs. The performance of our proposed method is numerically verified.
1604.02694
Hao Fu
Hao Fu, Xing Xie, Yong Rui, Defu Lian, Guangzhong Sun, Enhong Chen
Predicting Social Status via Social Networks: A Case Study on University, Occupation, and Region
null
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social status refers to the relative position within the society. It is an important notion in sociology and related research. The problem of measuring social status has been studied for many years. Various indicators are proposed to assess social status of individuals, including educational attainment, occupation, and income/wealth. However, these indicators are sometimes difficult to collect or measure. We investigate social networks for alternative measures of social status. Online activities expose certain traits of users in the real world. We are interested in how these activities are related to social status, and how social status can be predicted with social network data. To the best of our knowledge, this is the first study on connecting online activities with social status in reality. In particular, we focus on the network structure of microblogs in this study. A user following another implies some kind of status. We cast the predicted social status of users to the "status" of real-world entities, e.g., universities, occupations, and regions, so that we can compare and validate predicted results with facts in the real world. We propose an efficient algorithm for this task and evaluate it on a dataset consisting of 3.4 million users from Sina Weibo. The result shows that it is possible to predict social status with reasonable accuracy using social network data. We also point out challenges and limitations of this approach, e.g., inconsistency between online popularity and real-world status for certain users. Our findings provide insights on analyzing online social status and future designs of ranking schemes for social networks.
[ { "created": "Sun, 10 Apr 2016 14:21:29 GMT", "version": "v1" } ]
2016-04-12
[ [ "Fu", "Hao", "" ], [ "Xie", "Xing", "" ], [ "Rui", "Yong", "" ], [ "Lian", "Defu", "" ], [ "Sun", "Guangzhong", "" ], [ "Chen", "Enhong", "" ] ]
Social status refers to the relative position within the society. It is an important notion in sociology and related research. The problem of measuring social status has been studied for many years. Various indicators are proposed to assess social status of individuals, including educational attainment, occupation, and income/wealth. However, these indicators are sometimes difficult to collect or measure. We investigate social networks for alternative measures of social status. Online activities expose certain traits of users in the real world. We are interested in how these activities are related to social status, and how social status can be predicted with social network data. To the best of our knowledge, this is the first study on connecting online activities with social status in reality. In particular, we focus on the network structure of microblogs in this study. A user following another implies some kind of status. We cast the predicted social status of users to the "status" of real-world entities, e.g., universities, occupations, and regions, so that we can compare and validate predicted results with facts in the real world. We propose an efficient algorithm for this task and evaluate it on a dataset consisting of 3.4 million users from Sina Weibo. The result shows that it is possible to predict social status with reasonable accuracy using social network data. We also point out challenges and limitations of this approach, e.g., inconsistency between online popularity and real-world status for certain users. Our findings provide insights on analyzing online social status and future designs of ranking schemes for social networks.
2010.03738
Yang Deng
Yang Deng, Wenxuan Zhang, Wai Lam
Multi-hop Inference for Question-driven Summarization
Accepted by EMNLP 2020 (main conference, long paper)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question-driven summarization has been recently studied as an effective approach to summarizing the source document to produce concise but informative answers for non-factoid questions. In this work, we propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG), to incorporate multi-hop reasoning into question-driven summarization and, meanwhile, provide justifications for the generated summaries. Specifically, we jointly model the relevance to the question and the interrelation among different sentences via a human-like multi-hop inference module, which captures important sentences for justifying the summarized answer. A gated selective pointer generator network with a multi-view coverage mechanism is designed to integrate diverse information from different perspectives. Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets, namely WikiHow and PubMedQA.
[ { "created": "Thu, 8 Oct 2020 02:36:39 GMT", "version": "v1" } ]
2020-10-09
[ [ "Deng", "Yang", "" ], [ "Zhang", "Wenxuan", "" ], [ "Lam", "Wai", "" ] ]
Question-driven summarization has been recently studied as an effective approach to summarizing the source document to produce concise but informative answers for non-factoid questions. In this work, we propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG), to incorporate multi-hop reasoning into question-driven summarization and, meanwhile, provide justifications for the generated summaries. Specifically, we jointly model the relevance to the question and the interrelation among different sentences via a human-like multi-hop inference module, which captures important sentences for justifying the summarized answer. A gated selective pointer generator network with a multi-view coverage mechanism is designed to integrate diverse information from different perspectives. Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets, namely WikiHow and PubMedQA.
1105.1014
Maurizio Martina
Maurizio Martina and Guido Masera
Improving Network-on-Chip-based turbo decoder architectures
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work novel results concerning Network-on-Chip-based turbo decoder architectures are presented. Stemming from previous publications, this work concentrates first on improving the throughput by exploiting adaptive-bandwidth reduction techniques. This technique shows in the best case an improvement of more than 60 Mb/s. Moreover, it is known that double-binary turbo decoders require higher area than binary ones. This characteristic has the negative effect of increasing the data width of the network nodes. Thus, the second contribution of this work is to reduce the network complexity to support double-binary codes, by exploiting bit-level and pseudo-floating-point representation of the extrinsic information. These two techniques allow for an area reduction of more than 40% with a performance degradation of about 0.2 dB.
[ { "created": "Thu, 5 May 2011 08:41:43 GMT", "version": "v1" } ]
2011-05-06
[ [ "Martina", "Maurizio", "" ], [ "Masera", "Guido", "" ] ]
In this work novel results concerning Network-on-Chip-based turbo decoder architectures are presented. Stemming from previous publications, this work concentrates first on improving the throughput by exploiting adaptive-bandwidth reduction techniques. This technique shows in the best case an improvement of more than 60 Mb/s. Moreover, it is known that double-binary turbo decoders require higher area than binary ones. This characteristic has the negative effect of increasing the data width of the network nodes. Thus, the second contribution of this work is to reduce the network complexity to support double-binary codes, by exploiting bit-level and pseudo-floating-point representation of the extrinsic information. These two techniques allow for an area reduction of more than 40% with a performance degradation of about 0.2 dB.
2011.03894
Christoforos Mavrogiannis
Junha Roh, Christoforos Mavrogiannis, Rishabh Madan, Dieter Fox, Siddhartha S. Srinivasa
Multimodal Trajectory Prediction via Topological Invariance for Navigation at Uncontrolled Intersections
Preprint of a paper with the same title, accepted to the Conference on Robot Learning 2020
null
null
null
cs.RO cs.AI cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on decentralized navigation among multiple non-communicating rational agents at \emph{uncontrolled} intersections, i.e., street intersections without traffic signs or signals. Avoiding collisions in such domains relies on the ability of agents to predict each other's intentions reliably, and react quickly. Multiagent trajectory prediction is NP-hard whereas the sample complexity of existing data-driven approaches limits their applicability. Our key insight is that the geometric structure of the intersection and the incentive of agents to move efficiently and avoid collisions (rationality) reduces the space of likely behaviors, effectively relaxing the problem of trajectory prediction. In this paper, we collapse the space of multiagent trajectories at an intersection into a set of modes representing different classes of multiagent behavior, formalized using a notion of topological invariance. Based on this formalism, we design Multiple Topologies Prediction (MTP), a data-driven trajectory-prediction mechanism that reconstructs trajectory representations of high-likelihood modes in multiagent intersection scenes. We show that MTP outperforms a state-of-the-art multimodal trajectory prediction baseline (MFP) in terms of prediction accuracy by 78.24% on a challenging simulated dataset. Finally, we show that MTP enables our optimization-based planner, MTPnav, to achieve collision-free and time-efficient navigation across a variety of challenging intersection scenarios on the CARLA simulator.
[ { "created": "Sun, 8 Nov 2020 02:56:42 GMT", "version": "v1" } ]
2020-11-10
[ [ "Roh", "Junha", "" ], [ "Mavrogiannis", "Christoforos", "" ], [ "Madan", "Rishabh", "" ], [ "Fox", "Dieter", "" ], [ "Srinivasa", "Siddhartha S.", "" ] ]
We focus on decentralized navigation among multiple non-communicating rational agents at \emph{uncontrolled} intersections, i.e., street intersections without traffic signs or signals. Avoiding collisions in such domains relies on the ability of agents to predict each other's intentions reliably, and react quickly. Multiagent trajectory prediction is NP-hard whereas the sample complexity of existing data-driven approaches limits their applicability. Our key insight is that the geometric structure of the intersection and the incentive of agents to move efficiently and avoid collisions (rationality) reduces the space of likely behaviors, effectively relaxing the problem of trajectory prediction. In this paper, we collapse the space of multiagent trajectories at an intersection into a set of modes representing different classes of multiagent behavior, formalized using a notion of topological invariance. Based on this formalism, we design Multiple Topologies Prediction (MTP), a data-driven trajectory-prediction mechanism that reconstructs trajectory representations of high-likelihood modes in multiagent intersection scenes. We show that MTP outperforms a state-of-the-art multimodal trajectory prediction baseline (MFP) in terms of prediction accuracy by 78.24% on a challenging simulated dataset. Finally, we show that MTP enables our optimization-based planner, MTPnav, to achieve collision-free and time-efficient navigation across a variety of challenging intersection scenarios on the CARLA simulator.
2002.00556
Jeong-Hyun Cho
Jeong-Hyun Cho, Ji-Hoon Jeong, Dong-Joo Kim, and Seong-Whan Lee
A novel approach to classify natural grasp actions by estimating muscle activity patterns from EEG signals
4 pages, 4 figures, conference
null
null
null
cs.HC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developing electroencephalogram (EEG) based brain-computer interface (BCI) systems is challenging. In this study, we analyzed natural grasp actions from EEG. Ten healthy subjects participated in this experiment. They executed and imagined three sustained grasp actions. We proposed a novel approach which estimates muscle activity patterns from EEG signals to improve the overall classification accuracy. For implementation, we have recorded EEG and electromyogram (EMG) simultaneously. Using the similarity of the pattern estimated from EEG signals compared to the activity pattern from EMG signals showed higher classification accuracy than competitive methods. As a result, we obtained the average classification accuracy of 63.89($\pm$7.54)% for actual movement and 46.96($\pm$15.30)% for motor imagery. These are 21.59% and 5.66% higher than the result of the competitive model, respectively. This result is encouraging, and the proposed method could potentially be used in future applications, such as a BCI-driven robot control for handling various daily use objects.
[ { "created": "Mon, 3 Feb 2020 04:40:17 GMT", "version": "v1" } ]
2020-02-06
[ [ "Cho", "Jeong-Hyun", "" ], [ "Jeong", "Ji-Hoon", "" ], [ "Kim", "Dong-Joo", "" ], [ "Lee", "Seong-Whan", "" ] ]
Developing electroencephalogram (EEG) based brain-computer interface (BCI) systems is challenging. In this study, we analyzed natural grasp actions from EEG. Ten healthy subjects participated in this experiment. They executed and imagined three sustained grasp actions. We proposed a novel approach which estimates muscle activity patterns from EEG signals to improve the overall classification accuracy. For implementation, we have recorded EEG and electromyogram (EMG) simultaneously. Using the similarity of the pattern estimated from EEG signals compared to the activity pattern from EMG signals showed higher classification accuracy than competitive methods. As a result, we obtained the average classification accuracy of 63.89($\pm$7.54)% for actual movement and 46.96($\pm$15.30)% for motor imagery. These are 21.59% and 5.66% higher than the result of the competitive model, respectively. This result is encouraging, and the proposed method could potentially be used in future applications, such as a BCI-driven robot control for handling various daily use objects.
2407.10302
Oshani Seneviratne
Oshani Seneviratne
The Feasibility of a Smart Contract "Kill Switch"
null
null
null
null
cs.CR cs.ET
http://creativecommons.org/licenses/by/4.0/
The advent of blockchain technology and its adoption across various sectors have raised critical discussions about the need for regulatory mechanisms to ensure consumer protection, maintain financial stability, and address privacy concerns without compromising the foundational principles of decentralization and immutability inherent in blockchain platforms. We examine the existing mechanisms for smart contract termination across several major blockchain platforms, including Ethereum, BNB Smart Chain, Cardano, Solana, Hyperledger Fabric, Corda, IOTA, Aptos, and Sui. We assess the compatibility of these mechanisms with the requirements of the EU Data Act, focusing on aspects such as consumer protection, error correction, and regulatory compliance. Our analysis reveals a diverse landscape of approaches, from immutable smart contracts with built-in termination conditions to upgradable smart contracts that allow for post-deployment modifications. We discuss the challenges associated with implementing the so-called smart contract "kill switches," such as the balance between enabling regulatory compliance and preserving the decentralized ethos, the technical feasibility of such mechanisms, and the implications for security and trust in the ecosystem.
[ { "created": "Sun, 14 Jul 2024 19:31:15 GMT", "version": "v1" } ]
2024-07-16
[ [ "Seneviratne", "Oshani", "" ] ]
The advent of blockchain technology and its adoption across various sectors have raised critical discussions about the need for regulatory mechanisms to ensure consumer protection, maintain financial stability, and address privacy concerns without compromising the foundational principles of decentralization and immutability inherent in blockchain platforms. We examine the existing mechanisms for smart contract termination across several major blockchain platforms, including Ethereum, BNB Smart Chain, Cardano, Solana, Hyperledger Fabric, Corda, IOTA, Aptos, and Sui. We assess the compatibility of these mechanisms with the requirements of the EU Data Act, focusing on aspects such as consumer protection, error correction, and regulatory compliance. Our analysis reveals a diverse landscape of approaches, from immutable smart contracts with built-in termination conditions to upgradable smart contracts that allow for post-deployment modifications. We discuss the challenges associated with implementing the so-called smart contract "kill switches," such as the balance between enabling regulatory compliance and preserving the decentralized ethos, the technical feasibility of such mechanisms, and the implications for security and trust in the ecosystem.
1504.03754
Milad Mahdian
Milad Mahdian, Edmund Yeh
Throughput and Delay Scaling of Content-Centric Ad Hoc and Heterogeneous Wireless Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the throughput and delay characteristics of wireless caching networks, where users are mainly interested in retrieving content stored in the network, rather than in maintaining source-destination communication. Nodes are assumed to be uniformly distributed in the network area. Each node has a limited-capacity content store, which it uses to cache contents. We propose an achievable caching and transmission scheme whereby requesters retrieve content from the caching point which is closest in Euclidean distance. We establish the throughput and delay scaling of the achievable scheme, and show that the throughput and delay performance are order-optimal within a class of schemes. We then solve the caching optimization problem, and evaluate the network performance for a Zipf content popularity distribution, letting the number of content types and the network size both go to infinity. Finally, we extend our analysis to heterogeneous wireless networks where, in addition to wireless nodes, there are a number of base stations uniformly distributed at random in the network area. We show that in order to achieve a better performance in a heterogeneous network in the order sense, the number of base stations needs to be greater than the ratio of the number of nodes to the number of content types. Furthermore, we show that the heterogeneous network does not yield performance advantages in the order sense if the Zipf content popularity distribution exponent exceeds 3/2.
[ { "created": "Wed, 15 Apr 2015 01:09:11 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2016 00:37:33 GMT", "version": "v2" }, { "created": "Mon, 25 Apr 2016 14:30:06 GMT", "version": "v3" }, { "created": "Sat, 22 Apr 2017 23:59:49 GMT", "version": "v4" } ]
2017-04-25
[ [ "Mahdian", "Milad", "" ], [ "Yeh", "Edmund", "" ] ]
We study the throughput and delay characteristics of wireless caching networks, where users are mainly interested in retrieving content stored in the network, rather than in maintaining source-destination communication. Nodes are assumed to be uniformly distributed in the network area. Each node has a limited-capacity content store, which it uses to cache contents. We propose an achievable caching and transmission scheme whereby requesters retrieve content from the caching point which is closest in Euclidean distance. We establish the throughput and delay scaling of the achievable scheme, and show that the throughput and delay performance are order-optimal within a class of schemes. We then solve the caching optimization problem, and evaluate the network performance for a Zipf content popularity distribution, letting the number of content types and the network size both go to infinity. Finally, we extend our analysis to heterogeneous wireless networks where, in addition to wireless nodes, there are a number of base stations uniformly distributed at random in the network area. We show that in order to achieve a better performance in a heterogeneous network in the order sense, the number of base stations needs to be greater than the ratio of the number of nodes to the number of content types. Furthermore, we show that the heterogeneous network does not yield performance advantages in the order sense if the Zipf content popularity distribution exponent exceeds 3/2.