id
stringlengths
9
10
submitter
stringlengths
1
64
authors
stringlengths
4
20.7k
title
stringlengths
4
246
comments
stringlengths
1
523
journal-ref
stringlengths
4
404
doi
stringlengths
11
153
report-no
stringlengths
2
254
categories
stringlengths
5
98
license
stringclasses
9 values
orig_abstract
stringlengths
14
3.35k
versions
listlengths
1
60
update_date
stringlengths
10
10
authors_parsed
listlengths
1
1.35k
abstract
stringlengths
11
3.34k
2304.14268
Colin Cleveland Mr
Colin Cleveland, Chin-Yen Lee, Shen-Fu Tsai, Wei-Hsuan Yu, Hsuan-Wei Lee
Graphlet and Orbit Computation on Heterogeneous Graphs
13 pages, 7 figures
null
null
null
cs.SI physics.data-an
http://creativecommons.org/licenses/by/4.0/
Many applications, ranging from natural to social sciences, rely on graphlet analysis for the intuitive and meaningful characterization of networks employing micro-level structures as building blocks. However, it has not been thoroughly explored in heterogeneous graphs, which comprise various types of nodes and edges. Finding graphlets and orbits for heterogeneous graphs is difficult because of the heterogeneity and abundance of semantic information. We consider heterogeneous graphs, which can be treated as colored graphs. By applying the canonical label technique, we determine the graph isomorphism problem with multiple states on nodes and edges. With minimal parameters, we build all non-isomorphic graphs and associated orbits. We provide a Python package that can be used to generate orbits for colored directed graphs and determine the frequency of orbit occurrence. Finally, we provide four examples to illustrate the use of the Python package.
[ { "created": "Wed, 26 Apr 2023 13:16:22 GMT", "version": "v1" }, { "created": "Fri, 28 Apr 2023 20:16:51 GMT", "version": "v2" }, { "created": "Mon, 5 Jun 2023 13:52:26 GMT", "version": "v3" } ]
2023-06-06
[ [ "Cleveland", "Colin", "" ], [ "Lee", "Chin-Yen", "" ], [ "Tsai", "Shen-Fu", "" ], [ "Yu", "Wei-Hsuan", "" ], [ "Lee", "Hsuan-Wei", "" ] ]
Many applications, ranging from natural to social sciences, rely on graphlet analysis for the intuitive and meaningful characterization of networks employing micro-level structures as building blocks. However, it has not been thoroughly explored in heterogeneous graphs, which comprise various types of nodes and edges. Finding graphlets and orbits for heterogeneous graphs is difficult because of the heterogeneity and abundance of semantic information. We consider heterogeneous graphs, which can be treated as colored graphs. By applying the canonical label technique, we determine the graph isomorphism problem with multiple states on nodes and edges. With minimal parameters, we build all non-isomorphic graphs and associated orbits. We provide a Python package that can be used to generate orbits for colored directed graphs and determine the frequency of orbit occurrence. Finally, we provide four examples to illustrate the use of the Python package.
2208.10265
Soumyabrata Dev
Jiantao Wu, Fabrizio Orlandi, Tarek AlSkaif, Declan O'Sullivan, and Soumyabrata Dev
A semantic web approach to uplift decentralized household energy data
Published in Sustainable Energy, Grids and Networks (SEGAN) 2022
null
null
null
cs.AI cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a decentralized household energy system comprised of various devices such as home appliances, electric vehicles, and solar panels, end-users are able to dig deeper into the system's details and further achieve energy sustainability if they are presented with data on the electric energy consumption and production at the granularity of the device. However, many databases in this field are siloed from other domains, including solely information pertaining to energy. This may result in the loss of information (e.g. weather) on each device's energy use. Meanwhile, a large number of these datasets have been extensively used in computational modeling techniques such as machine learning models. While such computational approaches achieve great accuracy and performance by concentrating only on a local view of datasets, model reliability cannot be guaranteed since such models are very vulnerable to data input fluctuations when information omission is taken into account. This article tackles the data isolation issue in the field of smart energy systems by examining Semantic Web methods on top of a household energy system. We offer an ontology-based approach for managing decentralized data at the device-level resolution in a system. As a consequence, the scope of the data associated with each device may easily be expanded in an interoperable manner throughout the Web, and additional information, such as weather, can be obtained from the Web, provided that the data is organized according to W3C standards.
[ { "created": "Thu, 18 Aug 2022 17:21:18 GMT", "version": "v1" }, { "created": "Fri, 26 Aug 2022 22:48:54 GMT", "version": "v2" } ]
2022-08-30
[ [ "Wu", "Jiantao", "" ], [ "Orlandi", "Fabrizio", "" ], [ "AlSkaif", "Tarek", "" ], [ "O'Sullivan", "Declan", "" ], [ "Dev", "Soumyabrata", "" ] ]
In a decentralized household energy system comprised of various devices such as home appliances, electric vehicles, and solar panels, end-users are able to dig deeper into the system's details and further achieve energy sustainability if they are presented with data on the electric energy consumption and production at the granularity of the device. However, many databases in this field are siloed from other domains, including solely information pertaining to energy. This may result in the loss of information (e.g. weather) on each device's energy use. Meanwhile, a large number of these datasets have been extensively used in computational modeling techniques such as machine learning models. While such computational approaches achieve great accuracy and performance by concentrating only on a local view of datasets, model reliability cannot be guaranteed since such models are very vulnerable to data input fluctuations when information omission is taken into account. This article tackles the data isolation issue in the field of smart energy systems by examining Semantic Web methods on top of a household energy system. We offer an ontology-based approach for managing decentralized data at the device-level resolution in a system. As a consequence, the scope of the data associated with each device may easily be expanded in an interoperable manner throughout the Web, and additional information, such as weather, can be obtained from the Web, provided that the data is organized according to W3C standards.
2006.09736
Casper Hansen
Christian Hansen and Casper Hansen and Jakob Grue Simonsen and Birger Larsen and Stephen Alstrup and Christina Lioma
Factuality Checking in News Headlines with Eye Tracking
Accepted to SIGIR 2020
null
10.1145/3397271.3401221
null
cs.HC cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study whether it is possible to infer if a news headline is true or false using only the movement of the human eyes when reading news headlines. Our study with 55 participants who are eye-tracked when reading 108 news headlines (72 true, 36 false) shows that false headlines receive statistically significantly less visual attention than true headlines. We further build an ensemble learner that predicts news headline factuality using only eye-tracking measurements. Our model yields a mean AUC of 0.688 and is better at detecting false than true headlines. Through a model analysis, we find that eye-tracking 25 users when reading 3-6 headlines is sufficient for our ensemble learner.
[ { "created": "Wed, 17 Jun 2020 09:24:21 GMT", "version": "v1" } ]
2020-06-18
[ [ "Hansen", "Christian", "" ], [ "Hansen", "Casper", "" ], [ "Simonsen", "Jakob Grue", "" ], [ "Larsen", "Birger", "" ], [ "Alstrup", "Stephen", "" ], [ "Lioma", "Christina", "" ] ]
We study whether it is possible to infer if a news headline is true or false using only the movement of the human eyes when reading news headlines. Our study with 55 participants who are eye-tracked when reading 108 news headlines (72 true, 36 false) shows that false headlines receive statistically significantly less visual attention than true headlines. We further build an ensemble learner that predicts news headline factuality using only eye-tracking measurements. Our model yields a mean AUC of 0.688 and is better at detecting false than true headlines. Through a model analysis, we find that eye-tracking 25 users when reading 3-6 headlines is sufficient for our ensemble learner.
2306.02648
Cosijopii Garc\'ia-Garc\'ia
Cosijopii Garcia-Garcia and Alicia Morales-Reyes and Hugo Jair Escalante
Continuous Cartesian Genetic Programming based representation for Multi-Objective Neural Architecture Search
null
null
null
null
cs.NE cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose a novel approach for the challenge of designing less complex yet highly effective convolutional neural networks (CNNs) through the use of cartesian genetic programming (CGP) for neural architecture search (NAS). Our approach combines real-based and block-chained CNNs representations based on CGP for optimization in the continuous domain using multi-objective evolutionary algorithms (MOEAs). Two variants are introduced that differ in the granularity of the search space they consider. The proposed CGP-NASV1 and CGP-NASV2 algorithms were evaluated using the non-dominated sorting genetic algorithm II (NSGA-II) on the CIFAR-10 and CIFAR-100 datasets. The empirical analysis was extended to assess the crossover operator from differential evolution (DE), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) and S metric selection evolutionary multi-objective algorithm (SMS-EMOA) using the same representation. Experimental results demonstrate that our approach is competitive with state-of-the-art proposals in terms of classification performance and model complexity.
[ { "created": "Mon, 5 Jun 2023 07:32:47 GMT", "version": "v1" } ]
2023-06-06
[ [ "Garcia-Garcia", "Cosijopii", "" ], [ "Morales-Reyes", "Alicia", "" ], [ "Escalante", "Hugo Jair", "" ] ]
We propose a novel approach for the challenge of designing less complex yet highly effective convolutional neural networks (CNNs) through the use of cartesian genetic programming (CGP) for neural architecture search (NAS). Our approach combines real-based and block-chained CNNs representations based on CGP for optimization in the continuous domain using multi-objective evolutionary algorithms (MOEAs). Two variants are introduced that differ in the granularity of the search space they consider. The proposed CGP-NASV1 and CGP-NASV2 algorithms were evaluated using the non-dominated sorting genetic algorithm II (NSGA-II) on the CIFAR-10 and CIFAR-100 datasets. The empirical analysis was extended to assess the crossover operator from differential evolution (DE), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) and S metric selection evolutionary multi-objective algorithm (SMS-EMOA) using the same representation. Experimental results demonstrate that our approach is competitive with state-of-the-art proposals in terms of classification performance and model complexity.
2011.11095
Gevorg Yeghikyan
Gevorg Yeghikyan
How will AI and automation transform society and cities?
8 pages, 1 figure
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Against the backdrop of rising anxiety and discussions on the impact of AI on society, I explore in this article the structural possibilities of AI and automation triggering a new social conflict between the current capitalist elites and the emerging "creative class" (R&D scientists, engineers, business developers, etc.), and how this conflict can produce social tensions and transform urban space. By drawing insights from a structurally similar conflict in 17-18th century Europe between the aristocracy and the emerging bourgeoisie, the impact of this conflict on the social, spatial, and power landscapes in cities of that time, as well as current trends in urban geography, this article outlines the prospects of urban transformations under changing production and consumption economies.
[ { "created": "Sun, 22 Nov 2020 19:44:51 GMT", "version": "v1" } ]
2020-11-24
[ [ "Yeghikyan", "Gevorg", "" ] ]
Against the backdrop of rising anxiety and discussions on the impact of AI on society, I explore in this article the structural possibilities of AI and automation triggering a new social conflict between the current capitalist elites and the emerging "creative class" (R&D scientists, engineers, business developers, etc.), and how this conflict can produce social tensions and transform urban space. By drawing insights from a structurally similar conflict in 17-18th century Europe between the aristocracy and the emerging bourgeoisie, the impact of this conflict on the social, spatial, and power landscapes in cities of that time, as well as current trends in urban geography, this article outlines the prospects of urban transformations under changing production and consumption economies.
2101.03477
Peter Washington
Peter Washington, Onur Cezmi Mutlu, Emilie Leblanc, Aaron Kline, Cathy Hou, Brianna Chrisman, Nate Stockham, Kelley Paskov, Catalin Voss, Nick Haber, Dennis Wall
Training Affective Computer Vision Models by Crowdsourcing Soft-Target Labels
null
null
null
null
cs.CV cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Emotion classifiers traditionally predict discrete emotions. However, emotion expressions are often subjective, thus requiring a method to handle subjective labels. We explore the use of crowdsourcing to acquire reliable soft-target labels and evaluate an emotion detection classifier trained with these labels. We center our study on the Child Affective Facial Expression (CAFE) dataset, a gold standard collection of images depicting pediatric facial expressions along with 100 human labels per image. To test the feasibility of crowdsourcing to generate these labels, we used Microworkers to acquire labels for 207 CAFE images. We evaluate both unfiltered workers as well as workers selected through a short crowd filtration process. We then train two versions of a classifiers on soft-target CAFE labels using the original 100 annotations provided with the dataset: (1) a classifier trained with traditional one-hot encoded labels, and (2) a classifier trained with vector labels representing the distribution of CAFE annotator responses. We compare the resulting softmax output distributions of the two classifiers with a 2-sample independent t-test of L1 distances between the classifier's output probability distribution and the distribution of human labels. While agreement with CAFE is weak for unfiltered crowd workers, the filtered crowd agree with the CAFE labels 100% of the time for many emotions. While the F1-score for a one-hot encoded classifier is much higher (94.33% vs. 78.68%) with respect to the ground truth CAFE labels, the output probability vector of the crowd-trained classifier more closely resembles the distribution of human labels (t=3.2827, p=0.0014). Reporting an emotion probability distribution that accounts for the subjectivity of human interpretation. Crowdsourcing, including a sufficient filtering mechanism, is a feasible solution for acquiring soft-target labels.
[ { "created": "Sun, 10 Jan 2021 05:26:55 GMT", "version": "v1" }, { "created": "Wed, 22 Sep 2021 23:12:50 GMT", "version": "v2" } ]
2021-09-24
[ [ "Washington", "Peter", "" ], [ "Mutlu", "Onur Cezmi", "" ], [ "Leblanc", "Emilie", "" ], [ "Kline", "Aaron", "" ], [ "Hou", "Cathy", "" ], [ "Chrisman", "Brianna", "" ], [ "Stockham", "Nate", "" ], [ "Paskov", "Kelley", "" ], [ "Voss", "Catalin", "" ], [ "Haber", "Nick", "" ], [ "Wall", "Dennis", "" ] ]
Emotion classifiers traditionally predict discrete emotions. However, emotion expressions are often subjective, thus requiring a method to handle subjective labels. We explore the use of crowdsourcing to acquire reliable soft-target labels and evaluate an emotion detection classifier trained with these labels. We center our study on the Child Affective Facial Expression (CAFE) dataset, a gold standard collection of images depicting pediatric facial expressions along with 100 human labels per image. To test the feasibility of crowdsourcing to generate these labels, we used Microworkers to acquire labels for 207 CAFE images. We evaluate both unfiltered workers as well as workers selected through a short crowd filtration process. We then train two versions of a classifiers on soft-target CAFE labels using the original 100 annotations provided with the dataset: (1) a classifier trained with traditional one-hot encoded labels, and (2) a classifier trained with vector labels representing the distribution of CAFE annotator responses. We compare the resulting softmax output distributions of the two classifiers with a 2-sample independent t-test of L1 distances between the classifier's output probability distribution and the distribution of human labels. While agreement with CAFE is weak for unfiltered crowd workers, the filtered crowd agree with the CAFE labels 100% of the time for many emotions. While the F1-score for a one-hot encoded classifier is much higher (94.33% vs. 78.68%) with respect to the ground truth CAFE labels, the output probability vector of the crowd-trained classifier more closely resembles the distribution of human labels (t=3.2827, p=0.0014). Reporting an emotion probability distribution that accounts for the subjectivity of human interpretation. Crowdsourcing, including a sufficient filtering mechanism, is a feasible solution for acquiring soft-target labels.
1412.2342
Hayaru Shouno
Hayaru Shouno
Bayesian Image Restoration for Poisson Corrupted Image using a Latent Variational Method with Gaussian MRF
9 pages, 6 figures, The of this manuscript is submitting to the Information Processing Society of Japan(IPSJ), Transactions on Mathematical Modeling and its Applications (TOM)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We treat an image restoration problem with a Poisson noise chan- nel using a Bayesian framework. The Poisson randomness might be appeared in observation of low contrast object in the field of imaging. The noise observation is often hard to treat in a theo- retical analysis. In our formulation, we interpret the observation through the Poisson noise channel as a likelihood, and evaluate the bound of it with a Gaussian function using a latent variable method. We then introduce a Gaussian Markov random field (GMRF) as the prior for the Bayesian approach, and derive the posterior as a Gaussian distribution. The latent parameters in the likelihood and the hyperparameter in the GMRF prior could be treated as hid- den parameters, so that, we propose an algorithm to infer them in the expectation maximization (EM) framework using loopy belief propagation(LBP). We confirm the ability of our algorithm in the computer simulation, and compare it with the results of other im- age restoration frameworks.
[ { "created": "Sun, 7 Dec 2014 10:59:55 GMT", "version": "v1" } ]
2014-12-09
[ [ "Shouno", "Hayaru", "" ] ]
We treat an image restoration problem with a Poisson noise chan- nel using a Bayesian framework. The Poisson randomness might be appeared in observation of low contrast object in the field of imaging. The noise observation is often hard to treat in a theo- retical analysis. In our formulation, we interpret the observation through the Poisson noise channel as a likelihood, and evaluate the bound of it with a Gaussian function using a latent variable method. We then introduce a Gaussian Markov random field (GMRF) as the prior for the Bayesian approach, and derive the posterior as a Gaussian distribution. The latent parameters in the likelihood and the hyperparameter in the GMRF prior could be treated as hid- den parameters, so that, we propose an algorithm to infer them in the expectation maximization (EM) framework using loopy belief propagation(LBP). We confirm the ability of our algorithm in the computer simulation, and compare it with the results of other im- age restoration frameworks.
2311.00278
Min Jae Jung
Min Jae Jung, Seung Dae Han and Joohee Kim
Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection
19 pages, 11 figures
null
10.1016/j.cviu.2024.103956
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Few-shot object detection, which focuses on detecting novel objects with few labels, is an emerging challenge in the community. Recent studies show that adapting a pre-trained model or modified loss function can improve performance. In this paper, we explore leveraging the power of Contrastive Language-Image Pre-training (CLIP) and hard negative classification loss in low data setting. Specifically, we propose Re-scoring using Image-language Similarity for Few-shot object detection (RISF) which extends Faster R-CNN by introducing Calibration Module using CLIP (CM-CLIP) and Background Negative Re-scale Loss (BNRL). The former adapts CLIP, which performs zero-shot classification, to re-score the classification scores of a detector using image-class similarities, the latter is modified classification loss considering the punishment for fake backgrounds as well as confusing categories on a generalized few-shot object detection dataset. Extensive experiments on MS-COCO and PASCAL VOC show that the proposed RISF substantially outperforms the state-of-the-art approaches. The code will be available.
[ { "created": "Wed, 1 Nov 2023 04:04:34 GMT", "version": "v1" } ]
2024-07-25
[ [ "Jung", "Min Jae", "" ], [ "Han", "Seung Dae", "" ], [ "Kim", "Joohee", "" ] ]
Few-shot object detection, which focuses on detecting novel objects with few labels, is an emerging challenge in the community. Recent studies show that adapting a pre-trained model or modified loss function can improve performance. In this paper, we explore leveraging the power of Contrastive Language-Image Pre-training (CLIP) and hard negative classification loss in low data setting. Specifically, we propose Re-scoring using Image-language Similarity for Few-shot object detection (RISF) which extends Faster R-CNN by introducing Calibration Module using CLIP (CM-CLIP) and Background Negative Re-scale Loss (BNRL). The former adapts CLIP, which performs zero-shot classification, to re-score the classification scores of a detector using image-class similarities, the latter is modified classification loss considering the punishment for fake backgrounds as well as confusing categories on a generalized few-shot object detection dataset. Extensive experiments on MS-COCO and PASCAL VOC show that the proposed RISF substantially outperforms the state-of-the-art approaches. The code will be available.
1509.02620
Dharmendra Dixit
Dharmendra Dixit and P. R. Sahu
Performance of QAM Schemes with Dual-Hop DF Relaying Systems over Mixed $\eta$-$\mu$ and $\kappa$-$\mu$ Fading Channels
25
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Performance of quadrature amplitude modulation (QAM) schemes is analyzed with dual-hop decode-and-forward (DF) relaying systems over mixed $\eta$-$\mu$ and $\kappa$-$\mu$ fading channels. Closed-form expressions are obtained for the average symbol error rate (ASER) for general order rectangular QAM and cross QAM schemes using moment generating function based approach. Derived expressions are in the form of Lauricella's $(F_D^{(n)}(\cdot), \Phi_1^{(n)}(\cdot))$ hypergeometric functions which can be numerically evaluated using either integral or series representation. The obtained ASER expressions include other mixed fading channel cases addressed in the literature as special cases such as mixed Hoyt, and Rice fading, mixed Nakagami-$m$, and Rice fading. We further obtain a simple expression for the asymptotic ASER, which is useful to determine a factor governing the system performance at high SNRs, i.e., the diversity order. Additionally, we analyze the optimal power allocation, which provides a practical design rule to optimally distribute the total transmission power between the source and the relay to minimize the ASER. Extensive numerical and computer simulation results are presented that confirm the accuracy of presented mathematical analysis.
[ { "created": "Wed, 9 Sep 2015 03:14:41 GMT", "version": "v1" } ]
2015-09-10
[ [ "Dixit", "Dharmendra", "" ], [ "Sahu", "P. R.", "" ] ]
Performance of quadrature amplitude modulation (QAM) schemes is analyzed with dual-hop decode-and-forward (DF) relaying systems over mixed $\eta$-$\mu$ and $\kappa$-$\mu$ fading channels. Closed-form expressions are obtained for the average symbol error rate (ASER) for general order rectangular QAM and cross QAM schemes using moment generating function based approach. Derived expressions are in the form of Lauricella's $(F_D^{(n)}(\cdot), \Phi_1^{(n)}(\cdot))$ hypergeometric functions which can be numerically evaluated using either integral or series representation. The obtained ASER expressions include other mixed fading channel cases addressed in the literature as special cases such as mixed Hoyt, and Rice fading, mixed Nakagami-$m$, and Rice fading. We further obtain a simple expression for the asymptotic ASER, which is useful to determine a factor governing the system performance at high SNRs, i.e., the diversity order. Additionally, we analyze the optimal power allocation, which provides a practical design rule to optimally distribute the total transmission power between the source and the relay to minimize the ASER. Extensive numerical and computer simulation results are presented that confirm the accuracy of presented mathematical analysis.
2205.00840
Volker Nannen
Volker Nannen and Damian Bover
Traction of Interlocking Spikes on a Granular Material
null
Earth and Space 2022, 18th Biennial International Conference on Engineering, Science, Construction, and Operations in Challenging Environments
10.1061/9780784484470.009
null
cs.RO cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interlock drive system generates traction by inserting narrow articulated spikes into the ground and by leveraging the soil's strength to resist horizontal draft forces. The system promises high tractive performance in low gravity environments where tires have little traction for lack of weight. At Earth and Space 2021 we reported the performance of such spikes on a silty clay loam, a cohesive soil. We found that in such soil, traction below a critical depth is provided by a zone of lateral soil failure. We also found that the articulation translates a horizontal draft force into a vertical penetration force strong enough to penetrate a narrow spike to a depth where the soil can sustain the draft force, in a self-regulating way. It is conceivable that a granular material like regolith or sand with little to no cohesive strength provides less vertical penetration resistance and less resistance to a horizontal draft force than a cohesive soil, which leads to the question of whether and how much tractive force an interlocking spike can generate on a granular material. Here we report on field trials that study different spike designs in dry and unsaturated moist sand. The results demonstrate that a loose granular material requires larger spikes than a cohesive soil, that these larger spikes penetrate dry and moist sand reliably, and that they promise good tractive efficiency. The trials indicate that on sand, a larger spike diameter can improve the pull/weight ratio without a loss of tractive performance.
[ { "created": "Sat, 2 Apr 2022 22:07:10 GMT", "version": "v1" }, { "created": "Fri, 21 Apr 2023 19:02:38 GMT", "version": "v2" } ]
2023-04-25
[ [ "Nannen", "Volker", "" ], [ "Bover", "Damian", "" ] ]
The interlock drive system generates traction by inserting narrow articulated spikes into the ground and by leveraging the soil's strength to resist horizontal draft forces. The system promises high tractive performance in low gravity environments where tires have little traction for lack of weight. At Earth and Space 2021 we reported the performance of such spikes on a silty clay loam, a cohesive soil. We found that in such soil, traction below a critical depth is provided by a zone of lateral soil failure. We also found that the articulation translates a horizontal draft force into a vertical penetration force strong enough to penetrate a narrow spike to a depth where the soil can sustain the draft force, in a self-regulating way. It is conceivable that a granular material like regolith or sand with little to no cohesive strength provides less vertical penetration resistance and less resistance to a horizontal draft force than a cohesive soil, which leads to the question of whether and how much tractive force an interlocking spike can generate on a granular material. Here we report on field trials that study different spike designs in dry and unsaturated moist sand. The results demonstrate that a loose granular material requires larger spikes than a cohesive soil, that these larger spikes penetrate dry and moist sand reliably, and that they promise good tractive efficiency. The trials indicate that on sand, a larger spike diameter can improve the pull/weight ratio without a loss of tractive performance.
1304.4326
Himanshu Chauhan
Himanshu Chauhan, Vijay K. Garg, Aravind Natarajan, Neeraj Mittal
Distributed Abstraction Algorithm for Online Predicate Detection
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyzing a distributed computation is a hard problem in general due to the combinatorial explosion in the size of the state-space with the number of processes in the system. By abstracting the computation, unnecessary explorations can be avoided. Computation slicing is an approach for abstracting dis- tributed computations with respect to a given predicate. We focus on regular predicates, a family of predicates that covers a large number of commonly used predicates for runtime verification. The existing algorithms for computation slicing are centralized in nature in which a single process is responsible for computing the slice in either offline or online manner. In this paper, we present a distributed online algorithm for computing the slice of a distributed computation with respect to a regular predicate. Our algorithm distributes the work and storage requirements across the system, thus reducing the space and computation complexities per process. In addition, for conjunctive predicates, our algorithm also reduces the message load per process.
[ { "created": "Tue, 16 Apr 2013 03:56:24 GMT", "version": "v1" }, { "created": "Fri, 31 May 2013 04:03:28 GMT", "version": "v2" }, { "created": "Tue, 4 Jun 2013 06:23:50 GMT", "version": "v3" } ]
2013-06-05
[ [ "Chauhan", "Himanshu", "" ], [ "Garg", "Vijay K.", "" ], [ "Natarajan", "Aravind", "" ], [ "Mittal", "Neeraj", "" ] ]
Analyzing a distributed computation is a hard problem in general due to the combinatorial explosion in the size of the state-space with the number of processes in the system. By abstracting the computation, unnecessary explorations can be avoided. Computation slicing is an approach for abstracting dis- tributed computations with respect to a given predicate. We focus on regular predicates, a family of predicates that covers a large number of commonly used predicates for runtime verification. The existing algorithms for computation slicing are centralized in nature in which a single process is responsible for computing the slice in either offline or online manner. In this paper, we present a distributed online algorithm for computing the slice of a distributed computation with respect to a regular predicate. Our algorithm distributes the work and storage requirements across the system, thus reducing the space and computation complexities per process. In addition, for conjunctive predicates, our algorithm also reduces the message load per process.
2011.00717
Hongyuan Mei
Hongyuan Mei, Tom Wan, Jason Eisner
Noise-Contrastive Estimation for Multivariate Point Processes
NeurIPS 2020 camera-ready
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The log-likelihood of a generative model often involves both positive and negative terms. For a temporal multivariate point process, the negative term sums over all the possible event types at each time and also integrates over all the possible times. As a result, maximum likelihood estimation is expensive. We show how to instead apply a version of noise-contrastive estimation---a general parameter estimation method with a less expensive stochastic objective. Our specific instantiation of this general idea works out in an interestingly non-trivial way and has provable guarantees for its optimality, consistency and efficiency. On several synthetic and real-world datasets, our method shows benefits: for the model to achieve the same level of log-likelihood on held-out data, our method needs considerably fewer function evaluations and less wall-clock time.
[ { "created": "Mon, 2 Nov 2020 04:09:33 GMT", "version": "v1" } ]
2020-11-03
[ [ "Mei", "Hongyuan", "" ], [ "Wan", "Tom", "" ], [ "Eisner", "Jason", "" ] ]
The log-likelihood of a generative model often involves both positive and negative terms. For a temporal multivariate point process, the negative term sums over all the possible event types at each time and also integrates over all the possible times. As a result, maximum likelihood estimation is expensive. We show how to instead apply a version of noise-contrastive estimation---a general parameter estimation method with a less expensive stochastic objective. Our specific instantiation of this general idea works out in an interestingly non-trivial way and has provable guarantees for its optimality, consistency and efficiency. On several synthetic and real-world datasets, our method shows benefits: for the model to achieve the same level of log-likelihood on held-out data, our method needs considerably fewer function evaluations and less wall-clock time.
1810.13320
Longyue Wang
Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu
Convolutional Self-Attention Network
The least version of this paper has been uploaded to another link: arXiv:1904.03107
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-attention network (SAN) has recently attracted increasing interest due to its fully parallelized computation and flexibility in modeling dependencies. It can be further enhanced with multi-headed attention mechanism by allowing the model to jointly attend to information from different representation subspaces at different positions (Vaswani et al., 2017). In this work, we propose a novel convolutional self-attention network (CSAN), which offers SAN the abilities to 1) capture neighboring dependencies, and 2) model the interaction between multiple attention heads. Experimental results on WMT14 English-to-German translation task demonstrate that the proposed approach outperforms both the strong Transformer baseline and other existing works on enhancing the locality of SAN. Comparing with previous work, our model does not introduce any new parameters.
[ { "created": "Wed, 31 Oct 2018 14:58:30 GMT", "version": "v1" }, { "created": "Mon, 8 Apr 2019 09:15:30 GMT", "version": "v2" } ]
2019-04-09
[ [ "Yang", "Baosong", "" ], [ "Wang", "Longyue", "" ], [ "Wong", "Derek F.", "" ], [ "Chao", "Lidia S.", "" ], [ "Tu", "Zhaopeng", "" ] ]
Self-attention network (SAN) has recently attracted increasing interest due to its fully parallelized computation and flexibility in modeling dependencies. It can be further enhanced with multi-headed attention mechanism by allowing the model to jointly attend to information from different representation subspaces at different positions (Vaswani et al., 2017). In this work, we propose a novel convolutional self-attention network (CSAN), which offers SAN the abilities to 1) capture neighboring dependencies, and 2) model the interaction between multiple attention heads. Experimental results on WMT14 English-to-German translation task demonstrate that the proposed approach outperforms both the strong Transformer baseline and other existing works on enhancing the locality of SAN. Comparing with previous work, our model does not introduce any new parameters.
1607.05671
Shankara Narayanan Krishna
S Akshay, Patricia Bouyer, Shankara Narayanan Krishna, Lakshmi Manasa, Ashutosh Trivedi
Stochastic Timed Games Revisited
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic timed games (STGs), introduced by Bouyer and Forejt, naturally generalize both continuous-time Markov chains and timed automata by providing a partition of the locations between those controlled by two players (Player Box and Player Diamond) with competing objectives and those governed by stochastic laws. Depending on the number of players---$2$, $1$, or $0$---subclasses of stochastic timed games are often classified as $2\frac{1}{2}$-player, $1\frac{1}{2}$-player, and $\frac{1}{2}$-player games where the $\frac{1}{2}$ symbolizes the presence of the stochastic "nature" player. For STGs with reachability objectives it is known that $1\frac{1}{2}$-player one-clock STGs are decidable for qualitative objectives, and that $2\frac{1}{2}$-player three-clock STGs are undecidable for quantitative reachability objectives. This paper further refines the gap in this decidability spectrum. We show that quantitative reachability objectives are already undecidable for $1\frac{1}{2}$ player four-clock STGs, and even under the time-bounded restriction for $2\frac{1}{2}$-player five-clock STGs. We also obtain a class of $1\frac{1}{2}$, $2\frac{1}{2}$ player STGs for which the quantitative reachability problem is decidable.
[ { "created": "Tue, 19 Jul 2016 17:27:14 GMT", "version": "v1" } ]
2016-07-20
[ [ "Akshay", "S", "" ], [ "Bouyer", "Patricia", "" ], [ "Krishna", "Shankara Narayanan", "" ], [ "Manasa", "Lakshmi", "" ], [ "Trivedi", "Ashutosh", "" ] ]
Stochastic timed games (STGs), introduced by Bouyer and Forejt, naturally generalize both continuous-time Markov chains and timed automata by providing a partition of the locations between those controlled by two players (Player Box and Player Diamond) with competing objectives and those governed by stochastic laws. Depending on the number of players---$2$, $1$, or $0$---subclasses of stochastic timed games are often classified as $2\frac{1}{2}$-player, $1\frac{1}{2}$-player, and $\frac{1}{2}$-player games where the $\frac{1}{2}$ symbolizes the presence of the stochastic "nature" player. For STGs with reachability objectives it is known that $1\frac{1}{2}$-player one-clock STGs are decidable for qualitative objectives, and that $2\frac{1}{2}$-player three-clock STGs are undecidable for quantitative reachability objectives. This paper further refines the gap in this decidability spectrum. We show that quantitative reachability objectives are already undecidable for $1\frac{1}{2}$ player four-clock STGs, and even under the time-bounded restriction for $2\frac{1}{2}$-player five-clock STGs. We also obtain a class of $1\frac{1}{2}$, $2\frac{1}{2}$ player STGs for which the quantitative reachability problem is decidable.
1805.11598
Phoebe Mulcaire
Phoebe Mulcaire, Swabha Swayamdipta, Noah Smith
Polyglot Semantic Role Labeling
To appear at ACL 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous approaches to multilingual semantic dependency parsing treat languages independently, without exploiting the similarities between semantic structures across languages. We experiment with a new approach where we combine resources from a pair of languages in the CoNLL 2009 shared task to build a polyglot semantic role labeler. Notwithstanding the absence of parallel data, and the dissimilarity in annotations between languages, our approach results in an improvement in SRL performance on multiple languages over a monolingual baseline. Analysis of the polyglot model shows it to be advantageous in lower-resource settings.
[ { "created": "Tue, 29 May 2018 17:29:55 GMT", "version": "v1" } ]
2018-05-30
[ [ "Mulcaire", "Phoebe", "" ], [ "Swayamdipta", "Swabha", "" ], [ "Smith", "Noah", "" ] ]
Previous approaches to multilingual semantic dependency parsing treat languages independently, without exploiting the similarities between semantic structures across languages. We experiment with a new approach where we combine resources from a pair of languages in the CoNLL 2009 shared task to build a polyglot semantic role labeler. Notwithstanding the absence of parallel data, and the dissimilarity in annotations between languages, our approach results in an improvement in SRL performance on multiple languages over a monolingual baseline. Analysis of the polyglot model shows it to be advantageous in lower-resource settings.
2104.12141
Abhinandan Nath
Abhinandan Nath
Coresets for $k$-median clustering under Fr\'{e}chet and Hausdorff distances
null
null
null
null
cs.CG
http://creativecommons.org/licenses/by/4.0/
We give algorithms for computing coresets for $(1+\varepsilon)$-approximate $k$-median clustering of polygonal curves (under the discrete and continuous Fr\'{e}chet distance) and point sets (under the Hausdorff distance), when the cluster centers are restricted to be of low complexity. Ours is the first such result, where the size of the coreset is independent of the number of input curves/point sets to be clustered (although it still depends on the maximum complexity of each input object). Specifically, the size of the coreset is $\Theta\left(\frac{k^3lm^{\delta}d}{\varepsilon^2}\log\left( \frac{kl}{\varepsilon}\right)\right)$ for any $\delta > 0$, where $d$ is the ambient dimension, $m$ is the maximum number of points in an input curve/point set, and $l$ is the maximum number of points allowed in a cluster center. We formally characterize a general condition on the restricted space of cluster centers -- this helps us to generalize and apply the importance sampling framework, that was used by Langberg and Schulman for computing coresets for $k$-median clustering of $d$-dimensional points on normed spaces in $\mathbb{R}^d$, to the problem of clustering curves and point sets using the Fr\'{e}chet and Hausdorff metrics. Roughly, the condition places an upper bound on the number of different combinations of metric balls that the restricted space of cluster centers can hit. We also derive lower bounds on the size of the coreset, given the restriction that the coreset must be a subset of the input objects.
[ { "created": "Sun, 25 Apr 2021 12:27:05 GMT", "version": "v1" } ]
2021-04-27
[ [ "Nath", "Abhinandan", "" ] ]
We give algorithms for computing coresets for $(1+\varepsilon)$-approximate $k$-median clustering of polygonal curves (under the discrete and continuous Fr\'{e}chet distance) and point sets (under the Hausdorff distance), when the cluster centers are restricted to be of low complexity. Ours is the first such result, where the size of the coreset is independent of the number of input curves/point sets to be clustered (although it still depends on the maximum complexity of each input object). Specifically, the size of the coreset is $\Theta\left(\frac{k^3lm^{\delta}d}{\varepsilon^2}\log\left( \frac{kl}{\varepsilon}\right)\right)$ for any $\delta > 0$, where $d$ is the ambient dimension, $m$ is the maximum number of points in an input curve/point set, and $l$ is the maximum number of points allowed in a cluster center. We formally characterize a general condition on the restricted space of cluster centers -- this helps us to generalize and apply the importance sampling framework, that was used by Langberg and Schulman for computing coresets for $k$-median clustering of $d$-dimensional points on normed spaces in $\mathbb{R}^d$, to the problem of clustering curves and point sets using the Fr\'{e}chet and Hausdorff metrics. Roughly, the condition places an upper bound on the number of different combinations of metric balls that the restricted space of cluster centers can hit. We also derive lower bounds on the size of the coreset, given the restriction that the coreset must be a subset of the input objects.
2404.03164
Haonan Zhang
Haonan Zhang, Dongxia Wang, Zhu Sun, Yanhui Li, Youcheng Sun, Huizhi Liang, Wenhai Wang
Does Knowledge Graph Really Matter for Recommender Systems?
null
null
null
null
cs.IR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender systems (RSs) are designed to provide personalized recommendations to users. Recently, knowledge graphs (KGs) have been widely introduced in RSs to improve recommendation accuracy. In this study, however, we demonstrate that RSs do not necessarily perform worse even if the KG is downgraded to the user-item interaction graph only (or removed). We propose an evaluation framework KG4RecEval to systematically evaluate how much a KG contributes to the recommendation accuracy of a KG-based RS, using our defined metric KGER (KG utilization efficiency in recommendation). We consider the scenarios where knowledge in a KG gets completely removed, randomly distorted and decreased, and also where recommendations are for cold-start users. Our extensive experiments on four commonly used datasets and a number of state-of-the-art KG-based RSs reveal that: to remove, randomly distort or decrease knowledge does not necessarily decrease recommendation accuracy, even for cold-start users. These findings inspire us to rethink how to better utilize knowledge from existing KGs, whereby we discuss and provide insights into what characteristics of datasets and KG-based RSs may help improve KG utilization efficiency.
[ { "created": "Thu, 4 Apr 2024 02:32:58 GMT", "version": "v1" } ]
2024-04-05
[ [ "Zhang", "Haonan", "" ], [ "Wang", "Dongxia", "" ], [ "Sun", "Zhu", "" ], [ "Li", "Yanhui", "" ], [ "Sun", "Youcheng", "" ], [ "Liang", "Huizhi", "" ], [ "Wang", "Wenhai", "" ] ]
Recommender systems (RSs) are designed to provide personalized recommendations to users. Recently, knowledge graphs (KGs) have been widely introduced in RSs to improve recommendation accuracy. In this study, however, we demonstrate that RSs do not necessarily perform worse even if the KG is downgraded to the user-item interaction graph only (or removed). We propose an evaluation framework KG4RecEval to systematically evaluate how much a KG contributes to the recommendation accuracy of a KG-based RS, using our defined metric KGER (KG utilization efficiency in recommendation). We consider the scenarios where knowledge in a KG gets completely removed, randomly distorted and decreased, and also where recommendations are for cold-start users. Our extensive experiments on four commonly used datasets and a number of state-of-the-art KG-based RSs reveal that: to remove, randomly distort or decrease knowledge does not necessarily decrease recommendation accuracy, even for cold-start users. These findings inspire us to rethink how to better utilize knowledge from existing KGs, whereby we discuss and provide insights into what characteristics of datasets and KG-based RSs may help improve KG utilization efficiency.
2312.03558
Wenhui Wang
Wenhui Wang, Shuming Ma, Hanwen Xu, Naoto Usuyama, Jiayu Ding, Hoifung Poon, Furu Wei
When an Image is Worth 1,024 x 1,024 Words: A Case Study in Computational Pathology
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical report presents LongViT, a vision Transformer that can process gigapixel images in an end-to-end manner. Specifically, we split the gigapixel image into a sequence of millions of patches and project them linearly into embeddings. LongNet is then employed to model the extremely long sequence, generating representations that capture both short-range and long-range dependencies. The linear computation complexity of LongNet, along with its distributed algorithm, enables us to overcome the constraints of both computation and memory. We apply LongViT in the field of computational pathology, aiming for cancer diagnosis and prognosis within gigapixel whole-slide images. Experimental results demonstrate that LongViT effectively encodes gigapixel images and outperforms previous state-of-the-art methods on cancer subtyping and survival prediction. Code and models will be available at https://aka.ms/LongViT.
[ { "created": "Wed, 6 Dec 2023 15:40:28 GMT", "version": "v1" } ]
2023-12-07
[ [ "Wang", "Wenhui", "" ], [ "Ma", "Shuming", "" ], [ "Xu", "Hanwen", "" ], [ "Usuyama", "Naoto", "" ], [ "Ding", "Jiayu", "" ], [ "Poon", "Hoifung", "" ], [ "Wei", "Furu", "" ] ]
This technical report presents LongViT, a vision Transformer that can process gigapixel images in an end-to-end manner. Specifically, we split the gigapixel image into a sequence of millions of patches and project them linearly into embeddings. LongNet is then employed to model the extremely long sequence, generating representations that capture both short-range and long-range dependencies. The linear computation complexity of LongNet, along with its distributed algorithm, enables us to overcome the constraints of both computation and memory. We apply LongViT in the field of computational pathology, aiming for cancer diagnosis and prognosis within gigapixel whole-slide images. Experimental results demonstrate that LongViT effectively encodes gigapixel images and outperforms previous state-of-the-art methods on cancer subtyping and survival prediction. Code and models will be available at https://aka.ms/LongViT.
2310.11644
Morteza Fayazi
Serafina Kamp, Morteza Fayazi, Zineb Benameur-El, Shuyan Yu, Ronald Dreslinski
Open Information Extraction: A Review of Baseline Techniques, Approaches, and Applications
15 pages, 9 figures
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the abundant amount of available online and offline text data, there arises a crucial need to extract the relation between phrases and summarize the main content of each document in a few words. For this purpose, there have been many studies recently in Open Information Extraction (OIE). OIE improves upon relation extraction techniques by analyzing relations across different domains and avoids requiring hand-labeling pre-specified relations in sentences. This paper surveys recent approaches of OIE and its applications on Knowledge Graph (KG), text summarization, and Question Answering (QA). Moreover, the paper describes OIE basis methods in relation extraction. It briefly discusses the main approaches and the pros and cons of each method. Finally, it gives an overview about challenges, open issues, and future work opportunities for OIE, relation extraction, and OIE applications.
[ { "created": "Wed, 18 Oct 2023 01:06:01 GMT", "version": "v1" } ]
2023-10-19
[ [ "Kamp", "Serafina", "" ], [ "Fayazi", "Morteza", "" ], [ "Benameur-El", "Zineb", "" ], [ "Yu", "Shuyan", "" ], [ "Dreslinski", "Ronald", "" ] ]
With the abundant amount of available online and offline text data, there arises a crucial need to extract the relation between phrases and summarize the main content of each document in a few words. For this purpose, there have been many studies recently in Open Information Extraction (OIE). OIE improves upon relation extraction techniques by analyzing relations across different domains and avoids requiring hand-labeling pre-specified relations in sentences. This paper surveys recent approaches of OIE and its applications on Knowledge Graph (KG), text summarization, and Question Answering (QA). Moreover, the paper describes OIE basis methods in relation extraction. It briefly discusses the main approaches and the pros and cons of each method. Finally, it gives an overview about challenges, open issues, and future work opportunities for OIE, relation extraction, and OIE applications.
2310.18042
Alberto Sonnino
Sam Blackshear, Andrey Chursin, George Danezis, Anastasios Kichidis, Lefteris Kokoris-Kogias, Xun Li, Mark Logan, Ashok Menon, Todd Nowacki, Alberto Sonnino, Brandon Williams, Lu Zhang
Sui Lutris: A Blockchain Combining Broadcast and Consensus
null
null
null
null
cs.DC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sui Lutris is the first smart-contract platform to sustainably achieve sub-second finality. It achieves this significant decrease by employing consensusless agreement not only for simple payments but for a large variety of transactions. Unlike prior work, Sui Lutris neither compromises expressiveness nor throughput and can run perpetually without restarts. Sui Lutris achieves this by safely integrating consensuless agreement with a high-throughput consensus protocol that is invoked out of the critical finality path but ensures that when a transaction is at risk of inconsistent concurrent accesses, its settlement is delayed until the total ordering is resolved. Building such a hybrid architecture is especially delicate during reconfiguration events, where the system needs to preserve the safety of the consensusless path without compromising the long-term liveness of potentially misconfigured clients. We thus develop a novel reconfiguration protocol, the first to provably show the safe and efficient reconfiguration of a consensusless blockchain. Sui Lutris is currently running in production and underpins the Sui smart-contract platform. Combined with the use of Objects instead of accounts it enables the safe execution of smart contracts that expose objects as a first-class resource. In our experiments Sui Lutris achieves latency lower than 0.5 seconds for throughput up to 5,000 certificates per second (150k ops/s with transaction blocks), compared to the state-of-the-art real-world consensus latencies of 3 seconds. Furthermore, it gracefully handles validators crash-recovery and does not suffer visible performance degradation during reconfiguration.
[ { "created": "Fri, 27 Oct 2023 10:40:11 GMT", "version": "v1" }, { "created": "Wed, 1 May 2024 13:14:03 GMT", "version": "v2" }, { "created": "Mon, 6 May 2024 10:50:44 GMT", "version": "v3" }, { "created": "Mon, 12 Aug 2024 08:09:19 GMT", "version": "v4" } ]
2024-08-13
[ [ "Blackshear", "Sam", "" ], [ "Chursin", "Andrey", "" ], [ "Danezis", "George", "" ], [ "Kichidis", "Anastasios", "" ], [ "Kokoris-Kogias", "Lefteris", "" ], [ "Li", "Xun", "" ], [ "Logan", "Mark", "" ], [ "Menon", "Ashok", "" ], [ "Nowacki", "Todd", "" ], [ "Sonnino", "Alberto", "" ], [ "Williams", "Brandon", "" ], [ "Zhang", "Lu", "" ] ]
Sui Lutris is the first smart-contract platform to sustainably achieve sub-second finality. It achieves this significant decrease by employing consensusless agreement not only for simple payments but for a large variety of transactions. Unlike prior work, Sui Lutris neither compromises expressiveness nor throughput and can run perpetually without restarts. Sui Lutris achieves this by safely integrating consensuless agreement with a high-throughput consensus protocol that is invoked out of the critical finality path but ensures that when a transaction is at risk of inconsistent concurrent accesses, its settlement is delayed until the total ordering is resolved. Building such a hybrid architecture is especially delicate during reconfiguration events, where the system needs to preserve the safety of the consensusless path without compromising the long-term liveness of potentially misconfigured clients. We thus develop a novel reconfiguration protocol, the first to provably show the safe and efficient reconfiguration of a consensusless blockchain. Sui Lutris is currently running in production and underpins the Sui smart-contract platform. Combined with the use of Objects instead of accounts it enables the safe execution of smart contracts that expose objects as a first-class resource. In our experiments Sui Lutris achieves latency lower than 0.5 seconds for throughput up to 5,000 certificates per second (150k ops/s with transaction blocks), compared to the state-of-the-art real-world consensus latencies of 3 seconds. Furthermore, it gracefully handles validators crash-recovery and does not suffer visible performance degradation during reconfiguration.
2211.00806
Hamid Hosseinianfar
Hamid Hosseinianfar, Hami Rabbani, and Maite Brandt-Pearce
Optical Channel Impulse Response-Based Localization Using An Artificial Neural Network
null
null
null
null
cs.IT cs.LG eess.SP math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Visible light positioning has the potential to yield sub-centimeter accuracy in indoor environments, yet conventional received signal strength (RSS)-based localization algorithms cannot achieve this because their performance degrades from optical multipath reflection. However, this part of the optical received signal is deterministic due to the often static and predictable nature of the optical wireless channel. In this paper, the performance of optical channel impulse response (OCIR)-based localization is studied using an artificial neural network (ANN) to map embedded features of the OCIR to the user equipment's location. Numerical results show that OCIR-based localization outperforms conventional RSS techniques by two orders of magnitude using only two photodetectors as anchor points. The ANN technique can take advantage of multipath features in a wide range of scenarios, from using only the DC value to relying on high-resolution time sampling that can result in sub-centimeter accuracy.
[ { "created": "Wed, 2 Nov 2022 00:54:18 GMT", "version": "v1" }, { "created": "Fri, 4 Nov 2022 18:59:10 GMT", "version": "v2" } ]
2022-11-08
[ [ "Hosseinianfar", "Hamid", "" ], [ "Rabbani", "Hami", "" ], [ "Brandt-Pearce", "Maite", "" ] ]
Visible light positioning has the potential to yield sub-centimeter accuracy in indoor environments, yet conventional received signal strength (RSS)-based localization algorithms cannot achieve this because their performance degrades from optical multipath reflection. However, this part of the optical received signal is deterministic due to the often static and predictable nature of the optical wireless channel. In this paper, the performance of optical channel impulse response (OCIR)-based localization is studied using an artificial neural network (ANN) to map embedded features of the OCIR to the user equipment's location. Numerical results show that OCIR-based localization outperforms conventional RSS techniques by two orders of magnitude using only two photodetectors as anchor points. The ANN technique can take advantage of multipath features in a wide range of scenarios, from using only the DC value to relying on high-resolution time sampling that can result in sub-centimeter accuracy.
2303.15114
Aidana Massalimova
Aidana Massalimova, Maikel Timmermans, Nicola Cavalcanti, Daniel Suter, Matthias Seibold, Fabio Carrillo, Christoph J. Laux, Reto Sutter, Mazda Farshad, Kathleen Denis, Philipp F\"urnstahl
Automatic breach detection during spine pedicle drilling based on vibroacoustic sensing
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Pedicle drilling is a complex and critical spinal surgery task. Detecting breach or penetration of the surgical tool to the cortical wall during pilot-hole drilling is essential to avoid damage to vital anatomical structures adjacent to the pedicle, such as the spinal cord, blood vessels, and nerves. Currently, the guidance of pedicle drilling is done using image-guided methods that are radiation intensive and limited to the preoperative information. This work proposes a new radiation-free breach detection algorithm leveraging a non-visual sensor setup in combination with deep learning approach. Multiple vibroacoustic sensors, such as a contact microphone, a free-field microphone, a tri-axial accelerometer, a uni-axial accelerometer, and an optical tracking system were integrated into the setup. Data were collected on four cadaveric human spines, ranging from L5 to T10. An experienced spine surgeon drilled the pedicles relying on optical navigation. A new automatic labeling method based on the tracking data was introduced. Labeled data was subsequently fed to the network in mel-spectrograms, classifying the data into breach and non-breach. Different sensor types, sensor positioning, and their combinations were evaluated. The best results in breach recall for individual sensors could be achieved using contact microphones attached to the dorsal skin (85.8\%) and uni-axial accelerometers clamped to the spinous process of the drilled vertebra (81.0\%). The best-performing data fusion model combined the latter two sensors with a breach recall of 98\%. The proposed method shows the great potential of non-visual sensor fusion for avoiding screw misplacement and accidental bone breaches during pedicle drilling and could be extended to further surgical applications.
[ { "created": "Mon, 27 Mar 2023 11:32:14 GMT", "version": "v1" } ]
2023-03-28
[ [ "Massalimova", "Aidana", "" ], [ "Timmermans", "Maikel", "" ], [ "Cavalcanti", "Nicola", "" ], [ "Suter", "Daniel", "" ], [ "Seibold", "Matthias", "" ], [ "Carrillo", "Fabio", "" ], [ "Laux", "Christoph J.", "" ], [ "Sutter", "Reto", "" ], [ "Farshad", "Mazda", "" ], [ "Denis", "Kathleen", "" ], [ "Fürnstahl", "Philipp", "" ] ]
Pedicle drilling is a complex and critical spinal surgery task. Detecting breach or penetration of the surgical tool to the cortical wall during pilot-hole drilling is essential to avoid damage to vital anatomical structures adjacent to the pedicle, such as the spinal cord, blood vessels, and nerves. Currently, the guidance of pedicle drilling is done using image-guided methods that are radiation intensive and limited to the preoperative information. This work proposes a new radiation-free breach detection algorithm leveraging a non-visual sensor setup in combination with deep learning approach. Multiple vibroacoustic sensors, such as a contact microphone, a free-field microphone, a tri-axial accelerometer, a uni-axial accelerometer, and an optical tracking system were integrated into the setup. Data were collected on four cadaveric human spines, ranging from L5 to T10. An experienced spine surgeon drilled the pedicles relying on optical navigation. A new automatic labeling method based on the tracking data was introduced. Labeled data was subsequently fed to the network in mel-spectrograms, classifying the data into breach and non-breach. Different sensor types, sensor positioning, and their combinations were evaluated. The best results in breach recall for individual sensors could be achieved using contact microphones attached to the dorsal skin (85.8\%) and uni-axial accelerometers clamped to the spinous process of the drilled vertebra (81.0\%). The best-performing data fusion model combined the latter two sensors with a breach recall of 98\%. The proposed method shows the great potential of non-visual sensor fusion for avoiding screw misplacement and accidental bone breaches during pedicle drilling and could be extended to further surgical applications.
2302.08005
Hongzheng Chen
Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang
Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training
Accepted to ASPLOS'24
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent years have seen an increase in the development of large deep learning (DL) models, which makes training efficiency crucial. Common practice is struggling with the trade-off between usability and performance. On one hand, DL frameworks such as PyTorch use dynamic graphs to facilitate model developers at a price of sub-optimal model training performance. On the other hand, practitioners propose various approaches to improving the training efficiency by sacrificing some of the flexibility, ranging from making the graph static for more thorough optimization (e.g., XLA) to customizing optimization towards large-scale distributed training (e.g., DeepSpeed and Megatron-LM). In this paper, we aim to address the tension between usability and training efficiency through separation of concerns. Inspired by DL compilers that decouple the platform-specific optimizations of a tensor-level operator from its arithmetic definition, this paper proposes a schedule language, Slapo, to decouple model execution from definition. Specifically, Slapo works on a PyTorch model and uses a set of schedule primitives to convert the model for common model training optimizations such as high-performance kernels, effective 3D parallelism, and efficient activation checkpointing. Compared to existing optimization solutions, Slapo progressively optimizes the model "as-needed" through high-level primitives, and thus preserving programmability and debuggability for users to a large extent. Our evaluation results show that by scheduling the existing hand-crafted optimizations in a systematic way using Slapo, we are able to improve training throughput by up to 2.92x on a single machine with 8 NVIDIA V100 GPUs, and by up to 1.41x on multiple machines with up to 64 GPUs, when compared to the out-of-the-box performance of DeepSpeed and Megatron-LM.
[ { "created": "Thu, 16 Feb 2023 00:34:53 GMT", "version": "v1" }, { "created": "Sat, 23 Dec 2023 03:52:35 GMT", "version": "v2" } ]
2023-12-27
[ [ "Chen", "Hongzheng", "" ], [ "Yu", "Cody Hao", "" ], [ "Zheng", "Shuai", "" ], [ "Zhang", "Zhen", "" ], [ "Zhang", "Zhiru", "" ], [ "Wang", "Yida", "" ] ]
Recent years have seen an increase in the development of large deep learning (DL) models, which makes training efficiency crucial. Common practice is struggling with the trade-off between usability and performance. On one hand, DL frameworks such as PyTorch use dynamic graphs to facilitate model developers at a price of sub-optimal model training performance. On the other hand, practitioners propose various approaches to improving the training efficiency by sacrificing some of the flexibility, ranging from making the graph static for more thorough optimization (e.g., XLA) to customizing optimization towards large-scale distributed training (e.g., DeepSpeed and Megatron-LM). In this paper, we aim to address the tension between usability and training efficiency through separation of concerns. Inspired by DL compilers that decouple the platform-specific optimizations of a tensor-level operator from its arithmetic definition, this paper proposes a schedule language, Slapo, to decouple model execution from definition. Specifically, Slapo works on a PyTorch model and uses a set of schedule primitives to convert the model for common model training optimizations such as high-performance kernels, effective 3D parallelism, and efficient activation checkpointing. Compared to existing optimization solutions, Slapo progressively optimizes the model "as-needed" through high-level primitives, and thus preserving programmability and debuggability for users to a large extent. Our evaluation results show that by scheduling the existing hand-crafted optimizations in a systematic way using Slapo, we are able to improve training throughput by up to 2.92x on a single machine with 8 NVIDIA V100 GPUs, and by up to 1.41x on multiple machines with up to 64 GPUs, when compared to the out-of-the-box performance of DeepSpeed and Megatron-LM.
1305.2006
Jierui Xie
Jierui Xie, Mingming Chen, Boleslaw K. Szymanski
LabelRankT: Incremental Community Detection in Dynamic Networks via Label Propagation
DyNetMM 2013, New York, USA (conjunction with SIGMOD/PODS 2013)
Proc. DyNetMM 2013 at SIGMOD/PODS 2013, New York, NY, 2013
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An increasingly important challenge in network analysis is efficient detection and tracking of communities in dynamic networks for which changes arrive as a stream. There is a need for algorithms that can incrementally update and monitor communities whose evolution generates huge realtime data streams, such as the Internet or on-line social networks. In this paper, we propose LabelRankT, an online distributed algorithm for detection of communities in large-scale dynamic networks through stabilized label propagation. Results of tests on real-world networks demonstrate that LabelRankT has much lower computational costs than other algorithms. It also improves the quality of the detected communities compared to dynamic detection methods and matches the quality achieved by static detection approaches. Unlike most of other algorithms which apply only to binary networks, LabelRankT works on weighted and directed networks, which provides a flexible and promising solution for real-world applications.
[ { "created": "Thu, 9 May 2013 04:01:46 GMT", "version": "v1" }, { "created": "Sun, 12 May 2013 18:41:13 GMT", "version": "v2" } ]
2013-05-15
[ [ "Xie", "Jierui", "" ], [ "Chen", "Mingming", "" ], [ "Szymanski", "Boleslaw K.", "" ] ]
An increasingly important challenge in network analysis is efficient detection and tracking of communities in dynamic networks for which changes arrive as a stream. There is a need for algorithms that can incrementally update and monitor communities whose evolution generates huge realtime data streams, such as the Internet or on-line social networks. In this paper, we propose LabelRankT, an online distributed algorithm for detection of communities in large-scale dynamic networks through stabilized label propagation. Results of tests on real-world networks demonstrate that LabelRankT has much lower computational costs than other algorithms. It also improves the quality of the detected communities compared to dynamic detection methods and matches the quality achieved by static detection approaches. Unlike most of other algorithms which apply only to binary networks, LabelRankT works on weighted and directed networks, which provides a flexible and promising solution for real-world applications.
1602.07285
Rohit Singh
Rohit Singh, Armando Solar-Lezama
Automatic Generation of Formula Simplifiers based on Conditional Rewrite Rules
Submitted for peer reviewed conference
null
null
null
cs.PL
http://creativecommons.org/licenses/by/4.0/
This paper addresses the problem of creating simplifiers for logic formulas based on conditional term rewriting. In particular, the paper focuses on a program synthesis application where formula simplifications have been shown to have a significant impact. We show that by combining machine learning techniques with constraint-based synthesis, it is possible to synthesize a formula simplifier fully automatically from a corpus of representative problems, making it possible to create formula simplifiers tailored to specific problem domains. We demonstrate the benefits of our approach for synthesis benchmarks from the SyGuS competition and automated grading.
[ { "created": "Tue, 23 Feb 2016 20:09:33 GMT", "version": "v1" } ]
2016-02-24
[ [ "Singh", "Rohit", "" ], [ "Solar-Lezama", "Armando", "" ] ]
This paper addresses the problem of creating simplifiers for logic formulas based on conditional term rewriting. In particular, the paper focuses on a program synthesis application where formula simplifications have been shown to have a significant impact. We show that by combining machine learning techniques with constraint-based synthesis, it is possible to synthesize a formula simplifier fully automatically from a corpus of representative problems, making it possible to create formula simplifiers tailored to specific problem domains. We demonstrate the benefits of our approach for synthesis benchmarks from the SyGuS competition and automated grading.
2301.04205
Benjamin Mikek
Saksham Goel, Benjamin Mikek, Jehad Aly, Venkat Arun, Ahmed Saeed, Aditya Akella
A Performance Verification Methodology for Resource Allocation Heuristics
12 pages, 11 figures
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Performance verification is a nascent but promising tool for understanding the performance and limitations of heuristics under realistic assumptions. Bespoke performance verification tools have already demonstrated their value in settings like congestion control and packet scheduling. In this paper, we aim to emphasize the broad applicability and utility of performance verification. To that end, we highlight the design principles of performance verification. Then, we leverage that understanding to develop a set of easy-to-follow guidelines that are applicable to a wide range of resource allocation heuristics. In particular, we introduce Virelay, a framework that enables heuristic designers to express the behavior of their algorithms and their assumptions about the system in an environment that resembles a discrete-event simulator. We demonstrate the utility and ease-of-use of Virelay by applying it to six diverse case studies. We produce bounds on the performance of classical algorithms, work stealing and SRPT scheduling, under practical assumptions. We demonstrate Virelay's expressiveness by capturing existing models for congestion control and packet scheduling, and we verify the observation that TCP unfairness can cause some ML training workloads to spontaneously converge to a state of high network utilization. Finally, we use Virelay to identify two bugs in the Linux CFS load balancer.
[ { "created": "Tue, 10 Jan 2023 20:46:20 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 15:15:36 GMT", "version": "v2" } ]
2024-02-29
[ [ "Goel", "Saksham", "" ], [ "Mikek", "Benjamin", "" ], [ "Aly", "Jehad", "" ], [ "Arun", "Venkat", "" ], [ "Saeed", "Ahmed", "" ], [ "Akella", "Aditya", "" ] ]
Performance verification is a nascent but promising tool for understanding the performance and limitations of heuristics under realistic assumptions. Bespoke performance verification tools have already demonstrated their value in settings like congestion control and packet scheduling. In this paper, we aim to emphasize the broad applicability and utility of performance verification. To that end, we highlight the design principles of performance verification. Then, we leverage that understanding to develop a set of easy-to-follow guidelines that are applicable to a wide range of resource allocation heuristics. In particular, we introduce Virelay, a framework that enables heuristic designers to express the behavior of their algorithms and their assumptions about the system in an environment that resembles a discrete-event simulator. We demonstrate the utility and ease-of-use of Virelay by applying it to six diverse case studies. We produce bounds on the performance of classical algorithms, work stealing and SRPT scheduling, under practical assumptions. We demonstrate Virelay's expressiveness by capturing existing models for congestion control and packet scheduling, and we verify the observation that TCP unfairness can cause some ML training workloads to spontaneously converge to a state of high network utilization. Finally, we use Virelay to identify two bugs in the Linux CFS load balancer.
1404.2772
Ravi Ranjan
Ravi Ranjan and G. Sahoo
A New Clustering Approach for Anomaly Intrusion Detection
10 pages with 3 figures,2 Tables This paper explains about clustering methodology used in Data Mining field for Intrusion Detection in the area of Network Security
International Journal of Data Mining & Knowledge Management Process (IJDKP),ISSN:2230-9608[Online],2231-007X[Print] Vol.4, No.2, March 2014, page(s): 29-38
10.5121/ijdkp.2014.4203
null
cs.DC cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in technology have made our work easier compare to earlier times. Computer network is growing day by day but while discussing about the security of computers and networks it has always been a major concerns for organizations varying from smaller to larger enterprises. It is true that organizations are aware of the possible threats and attacks so they always prepare for the safer side but due to some loopholes attackers are able to make attacks. Intrusion detection is one of the major fields of research and researchers are trying to find new algorithms for detecting intrusions. Clustering techniques of data mining is an interested area of research for detecting possible intrusions and attacks. This paper presents a new clustering approach for anomaly intrusion detection by using the approach of K-medoids method of clustering and its certain modifications. The proposed algorithm is able to achieve high detection rate and overcomes the disadvantages of K-means algorithm.
[ { "created": "Thu, 10 Apr 2014 11:22:17 GMT", "version": "v1" } ]
2014-04-11
[ [ "Ranjan", "Ravi", "" ], [ "Sahoo", "G.", "" ] ]
Recent advances in technology have made our work easier compare to earlier times. Computer network is growing day by day but while discussing about the security of computers and networks it has always been a major concerns for organizations varying from smaller to larger enterprises. It is true that organizations are aware of the possible threats and attacks so they always prepare for the safer side but due to some loopholes attackers are able to make attacks. Intrusion detection is one of the major fields of research and researchers are trying to find new algorithms for detecting intrusions. Clustering techniques of data mining is an interested area of research for detecting possible intrusions and attacks. This paper presents a new clustering approach for anomaly intrusion detection by using the approach of K-medoids method of clustering and its certain modifications. The proposed algorithm is able to achieve high detection rate and overcomes the disadvantages of K-means algorithm.
1305.4583
Xin Zhao
Xin Zhao
Parallel Coordinates Guided High Dimensional Transfer Function Design
6 pages, 5 figures. This paper has been withdrawn by the author due to publication
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-dimensional transfer function design is widely used to provide appropriate data classification for direct volume rendering of various datasets. However, its design is a complicated task. Parallel coordinate plot (PCP), as a powerful visualization tool, can efficiently display high-dimensional geometry and accurately analyze multivariate data. In this paper, we propose to combine parallel coordinates with dimensional reduction methods to guide high-dimensional transfer function design. Our pipeline has two major advantages: (1) combine and display extracted high-dimensional features in parameter space; and (2) select appropriate high-dimensional parameters, with the help of dimensional reduction methods, to obtain sophisticated data classification as transfer function for volume rendering. In order to efficiently design high-dimensional transfer functions, the combination of both parallel coordinate components and dimension reduction results is necessary to generate final visualization results. We demonstrate the capability of our method for direct volume rendering using various CT and MRI datasets.
[ { "created": "Mon, 20 May 2013 17:27:29 GMT", "version": "v1" }, { "created": "Sun, 3 Nov 2013 21:39:13 GMT", "version": "v2" } ]
2013-11-05
[ [ "Zhao", "Xin", "" ] ]
High-dimensional transfer function design is widely used to provide appropriate data classification for direct volume rendering of various datasets. However, its design is a complicated task. Parallel coordinate plot (PCP), as a powerful visualization tool, can efficiently display high-dimensional geometry and accurately analyze multivariate data. In this paper, we propose to combine parallel coordinates with dimensional reduction methods to guide high-dimensional transfer function design. Our pipeline has two major advantages: (1) combine and display extracted high-dimensional features in parameter space; and (2) select appropriate high-dimensional parameters, with the help of dimensional reduction methods, to obtain sophisticated data classification as transfer function for volume rendering. In order to efficiently design high-dimensional transfer functions, the combination of both parallel coordinate components and dimension reduction results is necessary to generate final visualization results. We demonstrate the capability of our method for direct volume rendering using various CT and MRI datasets.
1202.4626
Avraham N. Trahtman
A. N. Trahtman
The \v{C}erny conjecture
14 pages, 11 Lemmas, most of which are considered trivial by various reviewers. Everything goes to that the main result is also trivial. And the author himself is inclined to admit it
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A word $w$ of letters on edges of underlying graph $\Gamma$ of deterministic finite automaton (DFA) is called synchronizing if $w$ sends all states of the automaton to a unique state. J. \v{C}erny discovered in 1964 a sequence of $n$-state complete DFA possessing a minimal synchronizing word of length $(n-1)^2$. The hypothesis, well known today as the \v{C}erny conjecture, claims that it is also precise upper bound on the length of such a word for a complete DFA. The hypothesis was formulated in 1966 by Starke. The problem has motivated great and constantly growing number of investigations and generalizations. To prove the conjecture, we use algebra w on a special class of row monomial matrices (one unit and rest zeros in every row), induced by words in the alphabet of labels on edges. These matrices generate a space with respect to the mentioned operation. The proof is based on connection between length of words $u$ and dimension of the space generated by solutions $L_x$ of matrix equation $M_uL_x=M_s$ for synchronizing word $s$, as well as on the relation between ranks of $M_u$ and $L_x$.
[ { "created": "Tue, 21 Feb 2012 12:50:14 GMT", "version": "v1" }, { "created": "Mon, 14 Jun 2021 15:24:13 GMT", "version": "v10" }, { "created": "Tue, 18 Jan 2022 11:16:53 GMT", "version": "v11" }, { "created": "Sat, 25 Feb 2012 09:42:30 GMT", "version": "v2" }, { "created": "Wed, 29 Feb 2012 08:58:28 GMT", "version": "v3" }, { "created": "Mon, 19 Aug 2013 18:54:12 GMT", "version": "v4" }, { "created": "Thu, 29 Aug 2013 06:51:30 GMT", "version": "v5" }, { "created": "Thu, 17 Oct 2013 07:22:11 GMT", "version": "v6" }, { "created": "Thu, 20 Mar 2014 13:29:06 GMT", "version": "v7" }, { "created": "Fri, 16 Sep 2016 14:55:56 GMT", "version": "v8" }, { "created": "Tue, 4 Jul 2017 10:30:27 GMT", "version": "v9" } ]
2022-01-19
[ [ "Trahtman", "A. N.", "" ] ]
A word $w$ of letters on edges of underlying graph $\Gamma$ of deterministic finite automaton (DFA) is called synchronizing if $w$ sends all states of the automaton to a unique state. J. \v{C}erny discovered in 1964 a sequence of $n$-state complete DFA possessing a minimal synchronizing word of length $(n-1)^2$. The hypothesis, well known today as the \v{C}erny conjecture, claims that it is also precise upper bound on the length of such a word for a complete DFA. The hypothesis was formulated in 1966 by Starke. The problem has motivated great and constantly growing number of investigations and generalizations. To prove the conjecture, we use algebra w on a special class of row monomial matrices (one unit and rest zeros in every row), induced by words in the alphabet of labels on edges. These matrices generate a space with respect to the mentioned operation. The proof is based on connection between length of words $u$ and dimension of the space generated by solutions $L_x$ of matrix equation $M_uL_x=M_s$ for synchronizing word $s$, as well as on the relation between ranks of $M_u$ and $L_x$.
2101.01677
Vitor Guizilini
Rares Ambrus, Vitor Guizilini, Naveen Kuppuswamy, Andrew Beaulieu, Adrien Gaidon, Alex Alspach
Monocular Depth Estimation for Soft Visuotactile Sensors
null
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Fluid-filled soft visuotactile sensors such as the Soft-bubbles alleviate key challenges for robust manipulation, as they enable reliable grasps along with the ability to obtain high-resolution sensory feedback on contact geometry and forces. Although they are simple in construction, their utility has been limited due to size constraints introduced by enclosed custom IR/depth imaging sensors to directly measure surface deformations. Towards mitigating this limitation, we investigate the application of state-of-the-art monocular depth estimation to infer dense internal (tactile) depth maps directly from the internal single small IR imaging sensor. Through real-world experiments, we show that deep networks typically used for long-range depth estimation (1-100m) can be effectively trained for precise predictions at a much shorter range (1-100mm) inside a mostly textureless deformable fluid-filled sensor. We propose a simple supervised learning process to train an object-agnostic network requiring less than 10 random poses in contact for less than 10 seconds for a small set of diverse objects (mug, wine glass, box, and fingers in our experiments). We show that our approach is sample-efficient, accurate, and generalizes across different objects and sensor configurations unseen at training time. Finally, we discuss the implications of our approach for the design of soft visuotactile sensors and grippers.
[ { "created": "Tue, 5 Jan 2021 17:51:11 GMT", "version": "v1" } ]
2021-01-06
[ [ "Ambrus", "Rares", "" ], [ "Guizilini", "Vitor", "" ], [ "Kuppuswamy", "Naveen", "" ], [ "Beaulieu", "Andrew", "" ], [ "Gaidon", "Adrien", "" ], [ "Alspach", "Alex", "" ] ]
Fluid-filled soft visuotactile sensors such as the Soft-bubbles alleviate key challenges for robust manipulation, as they enable reliable grasps along with the ability to obtain high-resolution sensory feedback on contact geometry and forces. Although they are simple in construction, their utility has been limited due to size constraints introduced by enclosed custom IR/depth imaging sensors to directly measure surface deformations. Towards mitigating this limitation, we investigate the application of state-of-the-art monocular depth estimation to infer dense internal (tactile) depth maps directly from the internal single small IR imaging sensor. Through real-world experiments, we show that deep networks typically used for long-range depth estimation (1-100m) can be effectively trained for precise predictions at a much shorter range (1-100mm) inside a mostly textureless deformable fluid-filled sensor. We propose a simple supervised learning process to train an object-agnostic network requiring less than 10 random poses in contact for less than 10 seconds for a small set of diverse objects (mug, wine glass, box, and fingers in our experiments). We show that our approach is sample-efficient, accurate, and generalizes across different objects and sensor configurations unseen at training time. Finally, we discuss the implications of our approach for the design of soft visuotactile sensors and grippers.
2308.16248
Danai Korre
Danai Korre and Andrew Sherlock
Augmented Reality in Higher Education: a Case Study in Medical Education
4 pages, 2 figures, 9th International Conference of the Immersive Learning Research Network (iLRN2023)
null
null
null
cs.HC cs.ET
http://creativecommons.org/licenses/by-sa/4.0/
During lockdown, we piloted a variety of augmented reality (AR) experiences in collaboration with subject matter experts from different fields aiming at creating remote teaching and training experiences. In this paper, we present a case study on how AR can be used as a teaching aid for medical education with pertinent focus on remote and social distanced learning. We describe the process of creating an AR experience that can enhance the knowledge and understanding of anatomy for medical students. The Anatomy Experience is an AR enhanced learning experience developed in collaboration with the Medical School of the University of Edinburgh aiming to assist medical students understand the complex geometry of different parts of the human body. After conducting a focus group study with medical students, trainees, and trainers, we received very positive feedback on the Anatomy Experience and its effects on understanding anatomy, enriching the learning process, and using it as a tool for anatomy teaching.
[ { "created": "Wed, 30 Aug 2023 18:11:58 GMT", "version": "v1" } ]
2023-09-14
[ [ "Korre", "Danai", "" ], [ "Sherlock", "Andrew", "" ] ]
During lockdown, we piloted a variety of augmented reality (AR) experiences in collaboration with subject matter experts from different fields aiming at creating remote teaching and training experiences. In this paper, we present a case study on how AR can be used as a teaching aid for medical education with pertinent focus on remote and social distanced learning. We describe the process of creating an AR experience that can enhance the knowledge and understanding of anatomy for medical students. The Anatomy Experience is an AR enhanced learning experience developed in collaboration with the Medical School of the University of Edinburgh aiming to assist medical students understand the complex geometry of different parts of the human body. After conducting a focus group study with medical students, trainees, and trainers, we received very positive feedback on the Anatomy Experience and its effects on understanding anatomy, enriching the learning process, and using it as a tool for anatomy teaching.
2004.04446
Yuqing Wang
Yuqing Wang, Zhaoliang Xu, Hao Shen, Baoshan Cheng, Lirong Yang
CenterMask: single shot instance segmentation with point representation
To appear at CVPR 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a single-shot instance segmentation method, which is simple, fast and accurate. There are two main challenges for one-stage instance segmentation: object instances differentiation and pixel-wise feature alignment. Accordingly, we decompose the instance segmentation into two parallel subtasks: Local Shape prediction that separates instances even in overlapping conditions, and Global Saliency generation that segments the whole image in a pixel-to-pixel manner. The outputs of the two branches are assembled to form the final instance masks. To realize that, the local shape information is adopted from the representation of object center points. Totally trained from scratch and without any bells and whistles, the proposed CenterMask achieves 34.5 mask AP with a speed of 12.3 fps, using a single-model with single-scale training/testing on the challenging COCO dataset. The accuracy is higher than all other one-stage instance segmentation methods except the 5 times slower TensorMask, which shows the effectiveness of CenterMask. Besides, our method can be easily embedded to other one-stage object detectors such as FCOS and performs well, showing the generalization of CenterMask.
[ { "created": "Thu, 9 Apr 2020 09:35:15 GMT", "version": "v1" }, { "created": "Sat, 11 Apr 2020 05:12:10 GMT", "version": "v2" } ]
2020-04-14
[ [ "Wang", "Yuqing", "" ], [ "Xu", "Zhaoliang", "" ], [ "Shen", "Hao", "" ], [ "Cheng", "Baoshan", "" ], [ "Yang", "Lirong", "" ] ]
In this paper, we propose a single-shot instance segmentation method, which is simple, fast and accurate. There are two main challenges for one-stage instance segmentation: object instances differentiation and pixel-wise feature alignment. Accordingly, we decompose the instance segmentation into two parallel subtasks: Local Shape prediction that separates instances even in overlapping conditions, and Global Saliency generation that segments the whole image in a pixel-to-pixel manner. The outputs of the two branches are assembled to form the final instance masks. To realize that, the local shape information is adopted from the representation of object center points. Totally trained from scratch and without any bells and whistles, the proposed CenterMask achieves 34.5 mask AP with a speed of 12.3 fps, using a single-model with single-scale training/testing on the challenging COCO dataset. The accuracy is higher than all other one-stage instance segmentation methods except the 5 times slower TensorMask, which shows the effectiveness of CenterMask. Besides, our method can be easily embedded to other one-stage object detectors such as FCOS and performs well, showing the generalization of CenterMask.
1809.05353
Diego Rodriguez
Diego Rodriguez, Corbin Cogswell, Seongyong Koo, and Sven Behnke
Transferring Grasping Skills to Novel Instances by Latent Space Non-Rigid Registration
In Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 2018
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robots acting in open environments need to be able to handle novel objects. Based on the observation that objects within a category are often similar in their shapes and usage, we propose an approach for transferring grasping skills from known instances to novel instances of an object category. Correspondences between the instances are established by means of a non-rigid registration method that combines the Coherent Point Drift approach with subspace methods. The known object instances are modeled using a canonical shape and a transformation which deforms it to match the instance shape. The principle axes of variation of these deformations define a low-dimensional latent space. New instances can be generated through interpolation and extrapolation in this shape space. For inferring the shape parameters of an unknown instance, an energy function expressed in terms of the latent variables is minimized. Due to the class-level knowledge of the object, our method is able to complete novel shapes from partial views. Control poses for generating grasping motions are transferred efficiently to novel instances by the estimated non-rigid transformation.
[ { "created": "Fri, 14 Sep 2018 11:06:58 GMT", "version": "v1" } ]
2018-09-17
[ [ "Rodriguez", "Diego", "" ], [ "Cogswell", "Corbin", "" ], [ "Koo", "Seongyong", "" ], [ "Behnke", "Sven", "" ] ]
Robots acting in open environments need to be able to handle novel objects. Based on the observation that objects within a category are often similar in their shapes and usage, we propose an approach for transferring grasping skills from known instances to novel instances of an object category. Correspondences between the instances are established by means of a non-rigid registration method that combines the Coherent Point Drift approach with subspace methods. The known object instances are modeled using a canonical shape and a transformation which deforms it to match the instance shape. The principle axes of variation of these deformations define a low-dimensional latent space. New instances can be generated through interpolation and extrapolation in this shape space. For inferring the shape parameters of an unknown instance, an energy function expressed in terms of the latent variables is minimized. Due to the class-level knowledge of the object, our method is able to complete novel shapes from partial views. Control poses for generating grasping motions are transferred efficiently to novel instances by the estimated non-rigid transformation.
1403.3905
Michael Hemmer
Francisc Bungiu and Michael Hemmer and John Hershberger and Kan Huang and Alexander Kr\"oller
Efficient Computation of Visibility Polygons
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining visibility in planar polygons and arrangements is an important subroutine for many algorithms in computational geometry. In this paper, we report on new implementations, and corresponding experimental evaluations, for two established and one novel algorithm for computing visibility polygons. These algorithms will be released to the public shortly, as a new package for the Computational Geometry Algorithms Library (CGAL).
[ { "created": "Sun, 16 Mar 2014 11:07:49 GMT", "version": "v1" } ]
2014-03-18
[ [ "Bungiu", "Francisc", "" ], [ "Hemmer", "Michael", "" ], [ "Hershberger", "John", "" ], [ "Huang", "Kan", "" ], [ "Kröller", "Alexander", "" ] ]
Determining visibility in planar polygons and arrangements is an important subroutine for many algorithms in computational geometry. In this paper, we report on new implementations, and corresponding experimental evaluations, for two established and one novel algorithm for computing visibility polygons. These algorithms will be released to the public shortly, as a new package for the Computational Geometry Algorithms Library (CGAL).
1606.07383
Soheil Feizi
Soheil Feizi, Muriel Medard, Gerald Quon, Manolis Kellis and Ken Duffy
Network Infusion to Infer Information Sources in Networks
21 pages, 13 figures
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several significant models have been developed that enable the study of diffusion of signals across biological, social and engineered networks. Within these established frameworks, the inverse problem of identifying the source of the propagated signal is challenging, owing to the numerous alternative possibilities for signal progression through the network. In real world networks, the challenge of determining sources is compounded as the true propagation dynamics are typically unknown, and when they have been directly measured, they rarely conform to the assumptions of any of the well-studied models. In this paper we introduce a method called Network Infusion (NI) that has been designed to circumvent these issues, making source inference practical for large, complex real world networks. The key idea is that to infer the source node in the network, full characterization of diffusion dynamics, in many cases, may not be necessary. This objective is achieved by creating a diffusion kernel that well-approximates standard diffusion models, but lends itself to inversion, by design, via likelihood maximization or error minimization. We apply NI for both single-source and multi-source diffusion, for both single-snapshot and multi-snapshot observations, and for both homogeneous and heterogeneous diffusion setups. We prove the mean-field optimality of NI for different scenarios, and demonstrate its effectiveness over several synthetic networks. Moreover, we apply NI to a real-data application, identifying news sources in the Digg social network, and demonstrate the effectiveness of NI compared to existing methods. Finally, we propose an integrative source inference framework that combines NI with a distance centrality-based method, which leads to a robust performance in cases where the underlying dynamics are unknown.
[ { "created": "Thu, 23 Jun 2016 17:45:23 GMT", "version": "v1" } ]
2016-06-24
[ [ "Feizi", "Soheil", "" ], [ "Medard", "Muriel", "" ], [ "Quon", "Gerald", "" ], [ "Kellis", "Manolis", "" ], [ "Duffy", "Ken", "" ] ]
Several significant models have been developed that enable the study of diffusion of signals across biological, social and engineered networks. Within these established frameworks, the inverse problem of identifying the source of the propagated signal is challenging, owing to the numerous alternative possibilities for signal progression through the network. In real world networks, the challenge of determining sources is compounded as the true propagation dynamics are typically unknown, and when they have been directly measured, they rarely conform to the assumptions of any of the well-studied models. In this paper we introduce a method called Network Infusion (NI) that has been designed to circumvent these issues, making source inference practical for large, complex real world networks. The key idea is that to infer the source node in the network, full characterization of diffusion dynamics, in many cases, may not be necessary. This objective is achieved by creating a diffusion kernel that well-approximates standard diffusion models, but lends itself to inversion, by design, via likelihood maximization or error minimization. We apply NI for both single-source and multi-source diffusion, for both single-snapshot and multi-snapshot observations, and for both homogeneous and heterogeneous diffusion setups. We prove the mean-field optimality of NI for different scenarios, and demonstrate its effectiveness over several synthetic networks. Moreover, we apply NI to a real-data application, identifying news sources in the Digg social network, and demonstrate the effectiveness of NI compared to existing methods. Finally, we propose an integrative source inference framework that combines NI with a distance centrality-based method, which leads to a robust performance in cases where the underlying dynamics are unknown.
2211.08273
Adolf Kamuzora Mr
Adolf Kamuzora, Wadie Skaf, Ermiyas Birihanu, Jiyan Mahmud, P\'eter Kiss, Tam\'as Jursonovics, Peter Pogrzeba, Imre Lend\'ak and Tom\'a\v{s} Horv\'ath
Matrix Factorization for Cache Optimization in Content Delivery Networks (CDN)
null
22nd Industrial Conference on Data Mining 2022, New York, USA Proceedings P. 1-10
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Content delivery networks (CDNs) are key components of high throughput, low latency services on the internet. CDN cache servers have limited storage and bandwidth and implement state-of-the-art cache admission and eviction algorithms to select the most popular and relevant content for the customers served. The aim of this study was to utilize state-of-the-art recommender system techniques for predicting ratings for cache content in CDN. Matrix factorization was used in predicting content popularity which is valuable information in content eviction and content admission algorithms run on CDN edge servers. A custom implemented matrix factorization class and MyMediaLite were utilized. The input CDN logs were received from a European telecommunication service provider. We built a matrix factorization model with that data and utilized grid search to tune its hyper-parameters. Experimental results indicate that there is promise about the proposed approaches and we showed that a low root mean square error value can be achieved on the real-life CDN log data.
[ { "created": "Wed, 5 Oct 2022 11:06:32 GMT", "version": "v1" } ]
2022-11-16
[ [ "Kamuzora", "Adolf", "" ], [ "Skaf", "Wadie", "" ], [ "Birihanu", "Ermiyas", "" ], [ "Mahmud", "Jiyan", "" ], [ "Kiss", "Péter", "" ], [ "Jursonovics", "Tamás", "" ], [ "Pogrzeba", "Peter", "" ], [ "Lendák", "Imre", "" ], [ "Horváth", "Tomáš", "" ] ]
Content delivery networks (CDNs) are key components of high throughput, low latency services on the internet. CDN cache servers have limited storage and bandwidth and implement state-of-the-art cache admission and eviction algorithms to select the most popular and relevant content for the customers served. The aim of this study was to utilize state-of-the-art recommender system techniques for predicting ratings for cache content in CDN. Matrix factorization was used in predicting content popularity which is valuable information in content eviction and content admission algorithms run on CDN edge servers. A custom implemented matrix factorization class and MyMediaLite were utilized. The input CDN logs were received from a European telecommunication service provider. We built a matrix factorization model with that data and utilized grid search to tune its hyper-parameters. Experimental results indicate that there is promise about the proposed approaches and we showed that a low root mean square error value can be achieved on the real-life CDN log data.
2311.11901
Lei Fan
Lei Fan, Yiwen Ding, Dongdong Fan, Yong Wu, Maurice Pagnucco and Yang Song
Identifying the Defective: Detecting Damaged Grains for Cereal Appearance Inspection
Accepted by ECAI2023. https://github.com/hellodfan/AI4GrainInsp
null
10.3233/FAIA230329
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cereal grain plays a crucial role in the human diet as a major source of essential nutrients. Grain Appearance Inspection (GAI) serves as an essential process to determine grain quality and facilitate grain circulation and processing. However, GAI is routinely performed manually by inspectors with cumbersome procedures, which poses a significant bottleneck in smart agriculture. In this paper, we endeavor to develop an automated GAI system:AI4GrainInsp. By analyzing the distinctive characteristics of grain kernels, we formulate GAI as a ubiquitous problem: Anomaly Detection (AD), in which healthy and edible kernels are considered normal samples while damaged grains or unknown objects are regarded as anomalies. We further propose an AD model, called AD-GAI, which is trained using only normal samples yet can identify anomalies during inference. Moreover, we customize a prototype device for data acquisition and create a large-scale dataset including 220K high-quality images of wheat and maize kernels. Through extensive experiments, AD-GAI achieves considerable performance in comparison with advanced AD methods, and AI4GrainInsp has highly consistent performance compared to human experts and excels at inspection efficiency over 20x speedup. The dataset, code and models will be released at https://github.com/hellodfan/AI4GrainInsp.
[ { "created": "Mon, 20 Nov 2023 16:35:16 GMT", "version": "v1" } ]
2023-11-21
[ [ "Fan", "Lei", "" ], [ "Ding", "Yiwen", "" ], [ "Fan", "Dongdong", "" ], [ "Wu", "Yong", "" ], [ "Pagnucco", "Maurice", "" ], [ "Song", "Yang", "" ] ]
Cereal grain plays a crucial role in the human diet as a major source of essential nutrients. Grain Appearance Inspection (GAI) serves as an essential process to determine grain quality and facilitate grain circulation and processing. However, GAI is routinely performed manually by inspectors with cumbersome procedures, which poses a significant bottleneck in smart agriculture. In this paper, we endeavor to develop an automated GAI system:AI4GrainInsp. By analyzing the distinctive characteristics of grain kernels, we formulate GAI as a ubiquitous problem: Anomaly Detection (AD), in which healthy and edible kernels are considered normal samples while damaged grains or unknown objects are regarded as anomalies. We further propose an AD model, called AD-GAI, which is trained using only normal samples yet can identify anomalies during inference. Moreover, we customize a prototype device for data acquisition and create a large-scale dataset including 220K high-quality images of wheat and maize kernels. Through extensive experiments, AD-GAI achieves considerable performance in comparison with advanced AD methods, and AI4GrainInsp has highly consistent performance compared to human experts and excels at inspection efficiency over 20x speedup. The dataset, code and models will be released at https://github.com/hellodfan/AI4GrainInsp.
2305.03977
Da Ren
Da Ren, Yi Cai, Qing Li
An Adversarial Non-Autoregressive Model for Text Generation with Incomplete Information
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-autoregressive models have been widely studied in the Complete Information Scenario (CIS), in which the input has complete information of corresponding output. However, their explorations in the Incomplete Information Scenario (IIS) are extremely limited. Our analyses reveal that the IIS's incomplete input information will augment the inherent limitations of existing non-autoregressive models trained under Maximum Likelihood Estimation. In this paper, we propose for the IIS an Adversarial Non-autoregressive Transformer (ANT) which has two features: 1) Position-Aware Self-Modulation to provide more reasonable hidden representations, and 2) Dependency Feed Forward Network to strengthen its capacity in dependency modeling. We compare ANT with other mainstream models in the IIS and demonstrate that ANT can achieve comparable performance with much fewer decoding iterations. Furthermore, we show its great potential in various applications like latent interpolation and semi-supervised learning.
[ { "created": "Sat, 6 May 2023 08:43:33 GMT", "version": "v1" }, { "created": "Fri, 1 Dec 2023 15:16:19 GMT", "version": "v2" } ]
2023-12-04
[ [ "Ren", "Da", "" ], [ "Cai", "Yi", "" ], [ "Li", "Qing", "" ] ]
Non-autoregressive models have been widely studied in the Complete Information Scenario (CIS), in which the input has complete information of corresponding output. However, their explorations in the Incomplete Information Scenario (IIS) are extremely limited. Our analyses reveal that the IIS's incomplete input information will augment the inherent limitations of existing non-autoregressive models trained under Maximum Likelihood Estimation. In this paper, we propose for the IIS an Adversarial Non-autoregressive Transformer (ANT) which has two features: 1) Position-Aware Self-Modulation to provide more reasonable hidden representations, and 2) Dependency Feed Forward Network to strengthen its capacity in dependency modeling. We compare ANT with other mainstream models in the IIS and demonstrate that ANT can achieve comparable performance with much fewer decoding iterations. Furthermore, we show its great potential in various applications like latent interpolation and semi-supervised learning.
1812.10280
Torgeir Dings{\o}yr
Torgeir Dings{\o}yr, Nils Brede Moe, Helena Holmstrom Ohlsson
Towards an Understanding of Scaling Frameworks and Business Agility: A Summary of the 6th International Workshop at XP2018
Summary of workshop at XP2018
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Large development projects and programs are conducted using agile development methods, with an increasing body of advice from practitioners and from research. This sixth workshop showed in increasing interest in scaling frameworks and in topics related to achieving business agility. This article summarizes four contributed papers, discussions in "open space" format and also presents a revised research agenda for large-scale agile development.
[ { "created": "Wed, 26 Dec 2018 10:45:08 GMT", "version": "v1" } ]
2018-12-27
[ [ "Dingsøyr", "Torgeir", "" ], [ "Moe", "Nils Brede", "" ], [ "Ohlsson", "Helena Holmstrom", "" ] ]
Large development projects and programs are conducted using agile development methods, with an increasing body of advice from practitioners and from research. This sixth workshop showed in increasing interest in scaling frameworks and in topics related to achieving business agility. This article summarizes four contributed papers, discussions in "open space" format and also presents a revised research agenda for large-scale agile development.
2304.01201
Ruihan Yang
Ruihan Yang, Ge Yang, Xiaolong Wang
Neural Volumetric Memory for Visual Locomotion Control
CVPR 2023 Highlight. Our project page with videos is https://rchalyang.github.io/NVM
null
null
null
cs.RO cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Legged robots have the potential to expand the reach of autonomy beyond paved roads. In this work, we consider the difficult problem of locomotion on challenging terrains using a single forward-facing depth camera. Due to the partial observability of the problem, the robot has to rely on past observations to infer the terrain currently beneath it. To solve this problem, we follow the paradigm in computer vision that explicitly models the 3D geometry of the scene and propose Neural Volumetric Memory (NVM), a geometric memory architecture that explicitly accounts for the SE(3) equivariance of the 3D world. NVM aggregates feature volumes from multiple camera views by first bringing them back to the ego-centric frame of the robot. We test the learned visual-locomotion policy on a physical robot and show that our approach, which explicitly introduces geometric priors during training, offers superior performance than more na\"ive methods. We also include ablation studies and show that the representations stored in the neural volumetric memory capture sufficient geometric information to reconstruct the scene. Our project page with videos is https://rchalyang.github.io/NVM .
[ { "created": "Mon, 3 Apr 2023 17:59:56 GMT", "version": "v1" } ]
2023-04-04
[ [ "Yang", "Ruihan", "" ], [ "Yang", "Ge", "" ], [ "Wang", "Xiaolong", "" ] ]
Legged robots have the potential to expand the reach of autonomy beyond paved roads. In this work, we consider the difficult problem of locomotion on challenging terrains using a single forward-facing depth camera. Due to the partial observability of the problem, the robot has to rely on past observations to infer the terrain currently beneath it. To solve this problem, we follow the paradigm in computer vision that explicitly models the 3D geometry of the scene and propose Neural Volumetric Memory (NVM), a geometric memory architecture that explicitly accounts for the SE(3) equivariance of the 3D world. NVM aggregates feature volumes from multiple camera views by first bringing them back to the ego-centric frame of the robot. We test the learned visual-locomotion policy on a physical robot and show that our approach, which explicitly introduces geometric priors during training, offers superior performance than more na\"ive methods. We also include ablation studies and show that the representations stored in the neural volumetric memory capture sufficient geometric information to reconstruct the scene. Our project page with videos is https://rchalyang.github.io/NVM .
2209.07660
Joshua Ott
Joshua Ott, Edward Balaban, Mykel J. Kochenderfer
Sequential Bayesian Optimization for Adaptive Informative Path Planning with Multimodal Sensing
null
null
null
null
cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Adaptive Informative Path Planning with Multimodal Sensing (AIPPMS) considers the problem of an agent equipped with multiple sensors, each with different sensing accuracy and energy costs. The agent's goal is to explore the environment and gather information subject to its resource constraints in unknown, partially observable environments. Previous work has focused on the less general Adaptive Informative Path Planning (AIPP) problem, which considers only the effect of the agent's movement on received observations. The AIPPMS problem adds additional complexity by requiring that the agent reasons jointly about the effects of sensing and movement while balancing resource constraints with information objectives. We formulate the AIPPMS problem as a belief Markov decision process with Gaussian process beliefs and solve it using a sequential Bayesian optimization approach with online planning. Our approach consistently outperforms previous AIPPMS solutions by more than doubling the average reward received in almost every experiment while also reducing the root-mean-square error in the environment belief by 50%. We completely open-source our implementation to aid in further development and comparison.
[ { "created": "Fri, 16 Sep 2022 00:50:36 GMT", "version": "v1" } ]
2022-09-19
[ [ "Ott", "Joshua", "" ], [ "Balaban", "Edward", "" ], [ "Kochenderfer", "Mykel J.", "" ] ]
Adaptive Informative Path Planning with Multimodal Sensing (AIPPMS) considers the problem of an agent equipped with multiple sensors, each with different sensing accuracy and energy costs. The agent's goal is to explore the environment and gather information subject to its resource constraints in unknown, partially observable environments. Previous work has focused on the less general Adaptive Informative Path Planning (AIPP) problem, which considers only the effect of the agent's movement on received observations. The AIPPMS problem adds additional complexity by requiring that the agent reasons jointly about the effects of sensing and movement while balancing resource constraints with information objectives. We formulate the AIPPMS problem as a belief Markov decision process with Gaussian process beliefs and solve it using a sequential Bayesian optimization approach with online planning. Our approach consistently outperforms previous AIPPMS solutions by more than doubling the average reward received in almost every experiment while also reducing the root-mean-square error in the environment belief by 50%. We completely open-source our implementation to aid in further development and comparison.
2009.06218
Jing Ji
Fanglan Zheng, Erihe, Kun Li, Jiang Tian, Xiaojia Xiang
A Vertical Federated Learning Method for Interpretable Scorecard and Its Application in Credit Scoring
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the success of big data and artificial intelligence in many fields, the applications of big data driven models are expected in financial risk management especially credit scoring and rating. Under the premise of data privacy protection, we propose a projected gradient-based method in the vertical federated learning framework for the traditional scorecard, which is based on logistic regression with bounded constraints, namely FL-LRBC. The latter enables multiple agencies to jointly train an optimized scorecard model in a single training session. It leads to the formation of the model with positive coefficients, while the time-consuming parameter-tuning process can be avoided. Moreover, the performance in terms of both AUC and the Kolmogorov-Smirnov (KS) statistics is significantly improved due to data enrichment using FL-LRBC. At present, FL-LRBC has already been applied to credit business in a China nation-wide financial holdings group.
[ { "created": "Mon, 14 Sep 2020 06:26:09 GMT", "version": "v1" } ]
2020-09-15
[ [ "Zheng", "Fanglan", "" ], [ "Erihe", "", "" ], [ "Li", "Kun", "" ], [ "Tian", "Jiang", "" ], [ "Xiang", "Xiaojia", "" ] ]
With the success of big data and artificial intelligence in many fields, the applications of big data driven models are expected in financial risk management especially credit scoring and rating. Under the premise of data privacy protection, we propose a projected gradient-based method in the vertical federated learning framework for the traditional scorecard, which is based on logistic regression with bounded constraints, namely FL-LRBC. The latter enables multiple agencies to jointly train an optimized scorecard model in a single training session. It leads to the formation of the model with positive coefficients, while the time-consuming parameter-tuning process can be avoided. Moreover, the performance in terms of both AUC and the Kolmogorov-Smirnov (KS) statistics is significantly improved due to data enrichment using FL-LRBC. At present, FL-LRBC has already been applied to credit business in a China nation-wide financial holdings group.
2012.00465
Daniel Barath
Yaqing Ding, Daniel Barath, Zuzana Kukelova
Minimal Solutions for Panoramic Stitching Given Gravity Prior
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
When capturing panoramas, people tend to align their cameras with the vertical axis, i.e., the direction of gravity. Moreover, modern devices, such as smartphones and tablets, are equipped with an IMU (Inertial Measurement Unit) that can measure the gravity vector accurately. Using this prior, the y-axes of the cameras can be aligned or assumed to be already aligned, reducing their relative orientation to 1-DOF (degree of freedom). Exploiting this assumption, we propose new minimal solutions to panoramic image stitching of images taken by cameras with coinciding optical centers, i.e., undergoing pure rotation. We consider four practical camera configurations, assuming unknown fixed or varying focal length with or without radial distortion. The solvers are tested both on synthetic scenes and on more than 500k real image pairs from the Sun360 dataset and from scenes captured by us using two smartphones equipped with IMUs. It is shown, that they outperform the state-of-the-art both in terms of accuracy and processing time.
[ { "created": "Tue, 1 Dec 2020 13:17:36 GMT", "version": "v1" } ]
2020-12-02
[ [ "Ding", "Yaqing", "" ], [ "Barath", "Daniel", "" ], [ "Kukelova", "Zuzana", "" ] ]
When capturing panoramas, people tend to align their cameras with the vertical axis, i.e., the direction of gravity. Moreover, modern devices, such as smartphones and tablets, are equipped with an IMU (Inertial Measurement Unit) that can measure the gravity vector accurately. Using this prior, the y-axes of the cameras can be aligned or assumed to be already aligned, reducing their relative orientation to 1-DOF (degree of freedom). Exploiting this assumption, we propose new minimal solutions to panoramic image stitching of images taken by cameras with coinciding optical centers, i.e., undergoing pure rotation. We consider four practical camera configurations, assuming unknown fixed or varying focal length with or without radial distortion. The solvers are tested both on synthetic scenes and on more than 500k real image pairs from the Sun360 dataset and from scenes captured by us using two smartphones equipped with IMUs. It is shown, that they outperform the state-of-the-art both in terms of accuracy and processing time.
2207.12757
Chun-Mao Lai
Chun-Mao Lai, Ming-Hao Hsu, Chao-Wei Huang, Yun-Nung Chen
Controllable User Dialogue Act Augmentation for Dialogue State Tracking
9 pages, 4 figures, accepted to sigdial 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prior work has demonstrated that data augmentation is useful for improving dialogue state tracking. However, there are many types of user utterances, while the prior method only considered the simplest one for augmentation, raising the concern about poor generalization capability. In order to better cover diverse dialogue acts and control the generation quality, this paper proposes controllable user dialogue act augmentation (CUDA-DST) to augment user utterances with diverse behaviors. With the augmented data, different state trackers gain improvement and show better robustness, achieving the state-of-the-art performance on MultiWOZ 2.1
[ { "created": "Tue, 26 Jul 2022 09:04:48 GMT", "version": "v1" } ]
2022-07-27
[ [ "Lai", "Chun-Mao", "" ], [ "Hsu", "Ming-Hao", "" ], [ "Huang", "Chao-Wei", "" ], [ "Chen", "Yun-Nung", "" ] ]
Prior work has demonstrated that data augmentation is useful for improving dialogue state tracking. However, there are many types of user utterances, while the prior method only considered the simplest one for augmentation, raising the concern about poor generalization capability. In order to better cover diverse dialogue acts and control the generation quality, this paper proposes controllable user dialogue act augmentation (CUDA-DST) to augment user utterances with diverse behaviors. With the augmented data, different state trackers gain improvement and show better robustness, achieving the state-of-the-art performance on MultiWOZ 2.1
1911.09576
Gordon MacDonald
Gordon MacDonald and Andrew Godbout and Bryn Gillcash and Stephanie Cairns
Volume-preserving Neural Networks
20 pages, 8 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel approach to addressing the vanishing (or exploding) gradient problem in deep neural networks. We construct a new architecture for deep neural networks where all layers (except the output layer) of the network are a combination of rotation, permutation, diagonal, and activation sublayers which are all volume preserving. Our approach replaces the standard weight matrix of a neural network with a combination of diagonal, rotational and permutation matrices, all of which are volume-preserving. We introduce a coupled activation function allowing us to preserve volume even in the activation function portion of a neural network layer. This control on the volume forces the gradient (on average) to maintain equilibrium and not explode or vanish. To demonstrate our architecture we apply our volume-preserving neural network model to two standard datasets.
[ { "created": "Thu, 21 Nov 2019 16:10:41 GMT", "version": "v1" }, { "created": "Fri, 22 Nov 2019 17:29:50 GMT", "version": "v2" }, { "created": "Mon, 26 Apr 2021 16:05:32 GMT", "version": "v3" } ]
2021-04-27
[ [ "MacDonald", "Gordon", "" ], [ "Godbout", "Andrew", "" ], [ "Gillcash", "Bryn", "" ], [ "Cairns", "Stephanie", "" ] ]
We propose a novel approach to addressing the vanishing (or exploding) gradient problem in deep neural networks. We construct a new architecture for deep neural networks where all layers (except the output layer) of the network are a combination of rotation, permutation, diagonal, and activation sublayers which are all volume preserving. Our approach replaces the standard weight matrix of a neural network with a combination of diagonal, rotational and permutation matrices, all of which are volume-preserving. We introduce a coupled activation function allowing us to preserve volume even in the activation function portion of a neural network layer. This control on the volume forces the gradient (on average) to maintain equilibrium and not explode or vanish. To demonstrate our architecture we apply our volume-preserving neural network model to two standard datasets.
2403.06465
Jianxun Lian
Jianxun Lian, Yuxuan Lei, Xu Huang, Jing Yao, Wei Xu, Xing Xie
RecAI: Leveraging Large Language Models for Next-Generation Recommender Systems
4 pages. Webconf 2024 demo track
null
10.1145/3589335.3651242
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces RecAI, a practical toolkit designed to augment or even revolutionize recommender systems with the advanced capabilities of Large Language Models (LLMs). RecAI provides a suite of tools, including Recommender AI Agent, Recommendation-oriented Language Models, Knowledge Plugin, RecExplainer, and Evaluator, to facilitate the integration of LLMs into recommender systems from multifaceted perspectives. The new generation of recommender systems, empowered by LLMs, are expected to be more versatile, explainable, conversational, and controllable, paving the way for more intelligent and user-centric recommendation experiences. We hope the open-source of RecAI can help accelerate evolution of new advanced recommender systems. The source code of RecAI is available at \url{https://github.com/microsoft/RecAI}.
[ { "created": "Mon, 11 Mar 2024 07:07:02 GMT", "version": "v1" } ]
2024-03-12
[ [ "Lian", "Jianxun", "" ], [ "Lei", "Yuxuan", "" ], [ "Huang", "Xu", "" ], [ "Yao", "Jing", "" ], [ "Xu", "Wei", "" ], [ "Xie", "Xing", "" ] ]
This paper introduces RecAI, a practical toolkit designed to augment or even revolutionize recommender systems with the advanced capabilities of Large Language Models (LLMs). RecAI provides a suite of tools, including Recommender AI Agent, Recommendation-oriented Language Models, Knowledge Plugin, RecExplainer, and Evaluator, to facilitate the integration of LLMs into recommender systems from multifaceted perspectives. The new generation of recommender systems, empowered by LLMs, are expected to be more versatile, explainable, conversational, and controllable, paving the way for more intelligent and user-centric recommendation experiences. We hope the open-source of RecAI can help accelerate evolution of new advanced recommender systems. The source code of RecAI is available at \url{https://github.com/microsoft/RecAI}.
1610.04551
Rafael Hurtado
Jorge Useche and Rafael Hurtado
Tonal consonance parameters link microscopic and macroscopic properties of music exposing a hidden order in melody
11 pages, 7 figures. Supplemental material contains 3 figures and 3 tables. An spreadsheet .xlsx contains data, fitting parameters, determination coefficients, expected values, and Lagrange multipliers
null
null
null
cs.SD cs.IT math.IT physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consonance is related to the perception of pleasantness arising from a combination of sounds and has been approached quantitatively using mathematical relations, physics, information theory, and psychoacoustics. Tonal consonance is present in timbre, musical tuning, harmony, and melody, and it is used for conveying sensations, perceptions, and emotions in music. It involves the physical properties of sound waves and is used to study melody and harmony through musical intervals and chords. From the perspective of complexity, the macroscopic properties of a system with many parts frequently rely on the statistical properties of its constituent elements. Here we show how the tonal consonance parameters for complex tones can be used to study complexity in music. We apply this formalism to melody, showing that melodic lines in musical pieces can be described in terms of the physical properties of melodic intervals and the existence of an entropy extremalization principle subject to psychoacoustic macroscopic constraints with musical meaning. This result connects the human perception of consonance with the complexity of human creativity in music through the physical properties of the musical stimulus.
[ { "created": "Fri, 14 Oct 2016 17:42:41 GMT", "version": "v1" }, { "created": "Fri, 27 Jan 2017 19:11:05 GMT", "version": "v2" }, { "created": "Thu, 9 Feb 2017 19:07:40 GMT", "version": "v3" }, { "created": "Sun, 23 Apr 2017 16:31:08 GMT", "version": "v4" } ]
2017-04-25
[ [ "Useche", "Jorge", "" ], [ "Hurtado", "Rafael", "" ] ]
Consonance is related to the perception of pleasantness arising from a combination of sounds and has been approached quantitatively using mathematical relations, physics, information theory, and psychoacoustics. Tonal consonance is present in timbre, musical tuning, harmony, and melody, and it is used for conveying sensations, perceptions, and emotions in music. It involves the physical properties of sound waves and is used to study melody and harmony through musical intervals and chords. From the perspective of complexity, the macroscopic properties of a system with many parts frequently rely on the statistical properties of its constituent elements. Here we show how the tonal consonance parameters for complex tones can be used to study complexity in music. We apply this formalism to melody, showing that melodic lines in musical pieces can be described in terms of the physical properties of melodic intervals and the existence of an entropy extremalization principle subject to psychoacoustic macroscopic constraints with musical meaning. This result connects the human perception of consonance with the complexity of human creativity in music through the physical properties of the musical stimulus.
2306.04431
Tom Lamb
Tom A. Lamb, Rudy Brunel, Krishnamurthy DJ Dvijotham, M. Pawan Kumar, Philip H. S. Torr, Francisco Eiras
Faithful Knowledge Distillation
7pgs (main content), 4 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Knowledge distillation (KD) has received much attention due to its success in compressing networks to allow for their deployment in resource-constrained systems. While the problem of adversarial robustness has been studied before in the KD setting, previous works overlook what we term the relative calibration of the student network with respect to its teacher in terms of soft confidences. In particular, we focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples? These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting. To address these questions, we introduce a faithful imitation framework to discuss the relative calibration of confidences and provide empirical and certified methods to evaluate the relative calibration of a student w.r.t. its teacher. Further, to verifiably align the relative calibration incentives of the student to those of its teacher, we introduce faithful distillation. Our experiments on the MNIST, Fashion-MNIST and CIFAR-10 datasets demonstrate the need for such an analysis and the advantages of the increased verifiability of faithful distillation over alternative adversarial distillation methods.
[ { "created": "Wed, 7 Jun 2023 13:41:55 GMT", "version": "v1" }, { "created": "Thu, 8 Jun 2023 09:50:27 GMT", "version": "v2" }, { "created": "Fri, 11 Aug 2023 13:39:06 GMT", "version": "v3" } ]
2023-08-14
[ [ "Lamb", "Tom A.", "" ], [ "Brunel", "Rudy", "" ], [ "Dvijotham", "Krishnamurthy DJ", "" ], [ "Kumar", "M. Pawan", "" ], [ "Torr", "Philip H. S.", "" ], [ "Eiras", "Francisco", "" ] ]
Knowledge distillation (KD) has received much attention due to its success in compressing networks to allow for their deployment in resource-constrained systems. While the problem of adversarial robustness has been studied before in the KD setting, previous works overlook what we term the relative calibration of the student network with respect to its teacher in terms of soft confidences. In particular, we focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples? These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting. To address these questions, we introduce a faithful imitation framework to discuss the relative calibration of confidences and provide empirical and certified methods to evaluate the relative calibration of a student w.r.t. its teacher. Further, to verifiably align the relative calibration incentives of the student to those of its teacher, we introduce faithful distillation. Our experiments on the MNIST, Fashion-MNIST and CIFAR-10 datasets demonstrate the need for such an analysis and the advantages of the increased verifiability of faithful distillation over alternative adversarial distillation methods.
1910.05280
Masato Tamura
Masato Tamura, Tomokazu Murakami
Augmented Hard Example Mining for Generalizable Person Re-Identification
Submit to WACV2020
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although the performance of person re-identification (Re-ID) has been much improved by using sophisticated training methods and large-scale labelled datasets, many existing methods make the impractical assumption that information of a target domain can be utilized during training. In practice, a Re-ID system often starts running as soon as it is deployed, hence training with data from a target domain is unrealistic. To make Re-ID systems more practical, methods have been proposed that achieve high performance without information of a target domain. However, they need cumbersome tuning for training and unusual operations for testing. In this paper, we propose augmented hard example mining, which can be easily integrated to a common Re-ID training process and can utilize sophisticated models without any network modification. The method discovers hard examples on the basis of classification probabilities, and to make the examples harder, various types of augmentation are applied to the examples. Among those examples, excessively augmented ones are eliminated by a classification based selection process. Extensive analysis shows that our method successfully selects effective examples and achieves state-of-the-art performance on publicly available benchmark datasets.
[ { "created": "Fri, 11 Oct 2019 16:19:53 GMT", "version": "v1" } ]
2019-10-14
[ [ "Tamura", "Masato", "" ], [ "Murakami", "Tomokazu", "" ] ]
Although the performance of person re-identification (Re-ID) has been much improved by using sophisticated training methods and large-scale labelled datasets, many existing methods make the impractical assumption that information of a target domain can be utilized during training. In practice, a Re-ID system often starts running as soon as it is deployed, hence training with data from a target domain is unrealistic. To make Re-ID systems more practical, methods have been proposed that achieve high performance without information of a target domain. However, they need cumbersome tuning for training and unusual operations for testing. In this paper, we propose augmented hard example mining, which can be easily integrated to a common Re-ID training process and can utilize sophisticated models without any network modification. The method discovers hard examples on the basis of classification probabilities, and to make the examples harder, various types of augmentation are applied to the examples. Among those examples, excessively augmented ones are eliminated by a classification based selection process. Extensive analysis shows that our method successfully selects effective examples and achieves state-of-the-art performance on publicly available benchmark datasets.
1408.6520
Shirin Sohrabi
Shirin Sohrabi and Octavian Udrea and Anton V. Riabov
Knowledge Engineering for Planning-Based Hypothesis Generation
This paper appears in the Proceedings of the Automated Planning and Scheduling (ICAPS) Workshop on Knowledge Engineering for Planning and Scheduling (KEPS)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the knowledge engineering problems for hypothesis generation motivated by applications that require timely exploration of hypotheses under unreliable observations. We looked at two applications: malware detection and intensive care delivery. In intensive care, the goal is to generate plausible hypotheses about the condition of the patient from clinical observations and further refine these hypotheses to create a recovery plan for the patient. Similarly, preventing malware spread within a corporate network involves generating hypotheses from network traffic data and selecting preventive actions. To this end, building on the already established characterization and use of AI planning for similar problems, we propose use of planning for the hypothesis generation problem. However, to deal with uncertainty, incomplete model description and unreliable observations, we need to use a planner capable of generating multiple high-quality plans. To capture the model description we propose a language called LTS++ and a web-based tool that enables the specification of the LTS++ model and a set of observations. We also proposed a 9-step process that helps provide guidance to the domain expert in specifying the LTS++ model. The hypotheses are then generated by running a planner on the translated LTS++ model and the provided trace. The hypotheses can be visualized and shown to the analyst or can be further investigated automatically.
[ { "created": "Wed, 27 Aug 2014 15:14:11 GMT", "version": "v1" } ]
2014-08-29
[ [ "Sohrabi", "Shirin", "" ], [ "Udrea", "Octavian", "" ], [ "Riabov", "Anton V.", "" ] ]
In this paper, we address the knowledge engineering problems for hypothesis generation motivated by applications that require timely exploration of hypotheses under unreliable observations. We looked at two applications: malware detection and intensive care delivery. In intensive care, the goal is to generate plausible hypotheses about the condition of the patient from clinical observations and further refine these hypotheses to create a recovery plan for the patient. Similarly, preventing malware spread within a corporate network involves generating hypotheses from network traffic data and selecting preventive actions. To this end, building on the already established characterization and use of AI planning for similar problems, we propose use of planning for the hypothesis generation problem. However, to deal with uncertainty, incomplete model description and unreliable observations, we need to use a planner capable of generating multiple high-quality plans. To capture the model description we propose a language called LTS++ and a web-based tool that enables the specification of the LTS++ model and a set of observations. We also proposed a 9-step process that helps provide guidance to the domain expert in specifying the LTS++ model. The hypotheses are then generated by running a planner on the translated LTS++ model and the provided trace. The hypotheses can be visualized and shown to the analyst or can be further investigated automatically.
2101.09588
Xiaobin Xiong
Xiaobin Xiong and Aaron Ames
3D Underactuated Bipedal Walking via H-LIP based Gait Synthesis and Stepping Stabilization
20 pages, 24 figures. Paper under review, comments are sincerely welcome
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we holistically present a Hybrid-Linear Inverted Pendulum (H-LIP) based approach for synthesizing and stabilizing 3D foot-underactuated bipedal walking, with an emphasis on thorough hardware realization. The H-LIP is proposed to capture the essential components of the underactuated and actuated part of the robotic walking. The robot walking gait is then directly synthesized based on the H-LIP. We comprehensively characterize the periodic orbits of the H-LIP and provably derive the stepping stabilization via its step-to-step (S2S) dynamics, which is then utilized to approximate the S2S dynamics of the horizontal state of the center of mass (COM) of the robotic walking. The approximation facilities a H-LIP based stepping controller to provide desired step sizes to stabilize the robotic walking. By realizing the desired step sizes, the robot achieves dynamic and stable walking. The approach is fully evaluated in both simulation and experiment on the 3D underactuated bipedal robot Cassie, which demonstrates dynamic walking behaviors with both high versatility and robustness.
[ { "created": "Sat, 23 Jan 2021 21:28:04 GMT", "version": "v1" }, { "created": "Fri, 5 Feb 2021 22:35:25 GMT", "version": "v2" }, { "created": "Wed, 3 Nov 2021 00:24:10 GMT", "version": "v3" } ]
2021-11-04
[ [ "Xiong", "Xiaobin", "" ], [ "Ames", "Aaron", "" ] ]
In this paper, we holistically present a Hybrid-Linear Inverted Pendulum (H-LIP) based approach for synthesizing and stabilizing 3D foot-underactuated bipedal walking, with an emphasis on thorough hardware realization. The H-LIP is proposed to capture the essential components of the underactuated and actuated part of the robotic walking. The robot walking gait is then directly synthesized based on the H-LIP. We comprehensively characterize the periodic orbits of the H-LIP and provably derive the stepping stabilization via its step-to-step (S2S) dynamics, which is then utilized to approximate the S2S dynamics of the horizontal state of the center of mass (COM) of the robotic walking. The approximation facilities a H-LIP based stepping controller to provide desired step sizes to stabilize the robotic walking. By realizing the desired step sizes, the robot achieves dynamic and stable walking. The approach is fully evaluated in both simulation and experiment on the 3D underactuated bipedal robot Cassie, which demonstrates dynamic walking behaviors with both high versatility and robustness.
0805.4374
Jin Xu
Jin Xu, Yi Cao, and Biao Chen
Capacity Bounds for Broadcast Channels with Confidential Messages
27 pages, 1 figure, submitted to IEEE Transaction on Information Theory
null
10.1109/TIT.2009.2027500
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study capacity bounds for discrete memoryless broadcast channels with confidential messages. Two private messages as well as a common message are transmitted; the common message is to be decoded by both receivers, while each private message is only for its intended receiver. In addition, each private message is to be kept secret from the unintended receiver where secrecy is measured by equivocation. We propose both inner and outer bounds to the rate equivocation region for broadcast channels with confidential messages. The proposed inner bound generalizes Csisz\'{a}r and K\"{o}rner's rate equivocation region for broadcast channels with a single confidential message, Liu {\em et al}'s achievable rate region for broadcast channels with perfect secrecy, Marton's and Gel'fand and Pinsker's achievable rate region for general broadcast channels. Our proposed outer bounds, together with the inner bound, helps establish the rate equivocation region of several classes of discrete memoryless broadcast channels with confidential messages, including less noisy, deterministic, and semi-deterministic channels. Furthermore, specializing to the general broadcast channel by removing the confidentiality constraint, our proposed outer bounds reduce to new capacity outer bounds for the discrete memory broadcast channel.
[ { "created": "Wed, 28 May 2008 15:36:46 GMT", "version": "v1" } ]
2016-11-17
[ [ "Xu", "Jin", "" ], [ "Cao", "Yi", "" ], [ "Chen", "Biao", "" ] ]
In this paper, we study capacity bounds for discrete memoryless broadcast channels with confidential messages. Two private messages as well as a common message are transmitted; the common message is to be decoded by both receivers, while each private message is only for its intended receiver. In addition, each private message is to be kept secret from the unintended receiver where secrecy is measured by equivocation. We propose both inner and outer bounds to the rate equivocation region for broadcast channels with confidential messages. The proposed inner bound generalizes Csisz\'{a}r and K\"{o}rner's rate equivocation region for broadcast channels with a single confidential message, Liu {\em et al}'s achievable rate region for broadcast channels with perfect secrecy, Marton's and Gel'fand and Pinsker's achievable rate region for general broadcast channels. Our proposed outer bounds, together with the inner bound, helps establish the rate equivocation region of several classes of discrete memoryless broadcast channels with confidential messages, including less noisy, deterministic, and semi-deterministic channels. Furthermore, specializing to the general broadcast channel by removing the confidentiality constraint, our proposed outer bounds reduce to new capacity outer bounds for the discrete memory broadcast channel.
1812.05816
Songyang Zhang
Songyang Zhang, Weimin Lei, Wei Zhang, Yunchong Guan
Shared Bottleneck Detecction Based on Trend Line Regression for Multipath Transmission
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current deployed multipath congestion control algorithms couple all the subflows together to avoid bandwidth occupation aggressiveness if the subflows of multipath transmission protocol share common bottleneck with single path TCP. The coupled congestion control algorithms can guarantee well fairness property in common bottleneck but result in rate increase conservativeness in none-sharing bottleneck situation. Thus, the throughput of multipath session can be further improved when combing with effective shared bottleneck detection mechanism. This paper proposes a delay trend line regression method to detect if flows share common bottleneck. Deduced from TCP fluid model, the packet round trip delay signal shows linear increase property during the queue building up process of the narrowest link and the delay trend line slopes of two flows are in close proximity if they traverse the same bottleneck link. The proposed method is implemented on multipath QUIC golang codebase and extensive simulations are performed to validate its effectiveness in detecting out flows traversing common bottleneck. If the subflows are detected out via a common bottleneck, the sender would perform coupled congestion control algorithm and perform congestion control seperately on flow level in none sharing bottleneck case. Results show a multipath session with two subflows can obtain 74\% gain on average in throughput compared with single path connection when Linked Increases Algorithm (LIA) is in combination with trend line regession shared bottlenck detection algorithm in none shared bottleneck, and show well fairness property in common bottleneck scenarios.
[ { "created": "Fri, 14 Dec 2018 08:19:39 GMT", "version": "v1" } ]
2018-12-17
[ [ "Zhang", "Songyang", "" ], [ "Lei", "Weimin", "" ], [ "Zhang", "Wei", "" ], [ "Guan", "Yunchong", "" ] ]
The current deployed multipath congestion control algorithms couple all the subflows together to avoid bandwidth occupation aggressiveness if the subflows of multipath transmission protocol share common bottleneck with single path TCP. The coupled congestion control algorithms can guarantee well fairness property in common bottleneck but result in rate increase conservativeness in none-sharing bottleneck situation. Thus, the throughput of multipath session can be further improved when combing with effective shared bottleneck detection mechanism. This paper proposes a delay trend line regression method to detect if flows share common bottleneck. Deduced from TCP fluid model, the packet round trip delay signal shows linear increase property during the queue building up process of the narrowest link and the delay trend line slopes of two flows are in close proximity if they traverse the same bottleneck link. The proposed method is implemented on multipath QUIC golang codebase and extensive simulations are performed to validate its effectiveness in detecting out flows traversing common bottleneck. If the subflows are detected out via a common bottleneck, the sender would perform coupled congestion control algorithm and perform congestion control seperately on flow level in none sharing bottleneck case. Results show a multipath session with two subflows can obtain 74\% gain on average in throughput compared with single path connection when Linked Increases Algorithm (LIA) is in combination with trend line regession shared bottlenck detection algorithm in none shared bottleneck, and show well fairness property in common bottleneck scenarios.
2306.06238
Harvey Dam
Harvey Dam, Vinu Joseph, Aditya Bhaskara, Ganesh Gopalakrishnan, Saurav Muralidharan, Michael Garland
Understanding the Effect of the Long Tail on Neural Network Compression
null
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Network compression is now a mature sub-field of neural network research: over the last decade, significant progress has been made towards reducing the size of models and speeding up inference, while maintaining the classification accuracy. However, many works have observed that focusing on just the overall accuracy can be misguided. E.g., it has been shown that mismatches between the full and compressed models can be biased towards under-represented classes. This raises the important research question, can we achieve network compression while maintaining "semantic equivalence" with the original network? In this work, we study this question in the context of the "long tail" phenomenon in computer vision datasets observed by Feldman, et al. They argue that memorization of certain inputs (appropriately defined) is essential to achieving good generalization. As compression limits the capacity of a network (and hence also its ability to memorize), we study the question: are mismatches between the full and compressed models correlated with the memorized training data? We present positive evidence in this direction for image classification tasks, by considering different base architectures and compression schemes.
[ { "created": "Fri, 9 Jun 2023 20:18:05 GMT", "version": "v1" }, { "created": "Mon, 19 Jun 2023 09:46:09 GMT", "version": "v2" }, { "created": "Tue, 27 Jun 2023 23:14:16 GMT", "version": "v3" } ]
2023-06-29
[ [ "Dam", "Harvey", "" ], [ "Joseph", "Vinu", "" ], [ "Bhaskara", "Aditya", "" ], [ "Gopalakrishnan", "Ganesh", "" ], [ "Muralidharan", "Saurav", "" ], [ "Garland", "Michael", "" ] ]
Network compression is now a mature sub-field of neural network research: over the last decade, significant progress has been made towards reducing the size of models and speeding up inference, while maintaining the classification accuracy. However, many works have observed that focusing on just the overall accuracy can be misguided. E.g., it has been shown that mismatches between the full and compressed models can be biased towards under-represented classes. This raises the important research question, can we achieve network compression while maintaining "semantic equivalence" with the original network? In this work, we study this question in the context of the "long tail" phenomenon in computer vision datasets observed by Feldman, et al. They argue that memorization of certain inputs (appropriately defined) is essential to achieving good generalization. As compression limits the capacity of a network (and hence also its ability to memorize), we study the question: are mismatches between the full and compressed models correlated with the memorized training data? We present positive evidence in this direction for image classification tasks, by considering different base architectures and compression schemes.
1806.03483
Chengyuan Zhang
Chengyuan Zhang, Ruipeng Chen, Lei Zhu, Anfeng Liu, Yunwu Lin and Fang Huang
Hierarchical Information Quadtree: Efficient Spatial Temporal Image Search for Multimedia Stream
Published at Multimedia Tools and Applications. arXiv admin note: text overlap with arXiv:1805.02009
null
10.1007/s11042-018-6284-y
null
cs.MM cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massive amount of multimedia data that contain times- tamps and geographical information are being generated at an unprecedented scale in many emerging applications such as photo sharing web site and social networks applications. Due to their importance, a large body of work has focused on efficiently computing various spatial image queries. In this paper,we study the spatial temporal image query which considers three important constraints during the search including time recency, spatial proximity and visual relevance. A novel index structure, namely Hierarchical Information Quadtree(\hiq), to efficiently insert/delete spatial temporal images with high arrive rates. Base on \hiq an efficient algorithm is developed to support spatial temporal image query. We show via extensive experimentation with real spatial databases clearly demonstrate the efficiency of our methods.
[ { "created": "Sat, 9 Jun 2018 14:53:07 GMT", "version": "v1" }, { "created": "Wed, 15 Aug 2018 07:53:15 GMT", "version": "v2" } ]
2018-08-16
[ [ "Zhang", "Chengyuan", "" ], [ "Chen", "Ruipeng", "" ], [ "Zhu", "Lei", "" ], [ "Liu", "Anfeng", "" ], [ "Lin", "Yunwu", "" ], [ "Huang", "Fang", "" ] ]
Massive amount of multimedia data that contain times- tamps and geographical information are being generated at an unprecedented scale in many emerging applications such as photo sharing web site and social networks applications. Due to their importance, a large body of work has focused on efficiently computing various spatial image queries. In this paper,we study the spatial temporal image query which considers three important constraints during the search including time recency, spatial proximity and visual relevance. A novel index structure, namely Hierarchical Information Quadtree(\hiq), to efficiently insert/delete spatial temporal images with high arrive rates. Base on \hiq an efficient algorithm is developed to support spatial temporal image query. We show via extensive experimentation with real spatial databases clearly demonstrate the efficiency of our methods.
1604.07095
Xiaoxiao Guo
Xiaoxiao Guo, Satinder Singh, Richard Lewis and Honglak Lee
Deep Learning for Reward Design to Improve Monte Carlo Tree Search in ATARI Games
In 25th International Joint Conference on Artificial Intelligence (IJCAI), 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for reward-design) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD's performance. The new method improves UCT's performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.
[ { "created": "Sun, 24 Apr 2016 23:51:18 GMT", "version": "v1" } ]
2016-04-26
[ [ "Guo", "Xiaoxiao", "" ], [ "Singh", "Satinder", "" ], [ "Lewis", "Richard", "" ], [ "Lee", "Honglak", "" ] ]
Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for reward-design) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD's performance. The new method improves UCT's performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.
2011.07964
Martin Hole\v{c}ek
Martin Hole\v{c}ek
Learning from similarity and information extraction from structured documents
17 pages, 9 figures, manuscript for the IJDAR journal special issue for ICDAR conference
Hole\v{c}ek, M. 2021 Learning from similarity and information extraction from structured documents; International Journal on Document Analysis and Recognition (IJDAR) 2021/06/11
10.1007/s10032-021-00375-3
null
cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The automation of document processing is gaining recent attention due to the great potential to reduce manual work through improved methods and hardware. Neural networks have been successfully applied before - even though they have been trained only on relatively small datasets with hundreds of documents so far. To successfully explore deep learning techniques and improve the information extraction results, a dataset with more than twenty-five thousand documents has been compiled, anonymized and is published as a part of this work. We will expand our previous work where we proved that convolutions, graph convolutions and self-attention can work together and exploit all the information present in a structured document. Taking the fully trainable method one step further, we will now design and examine various approaches to using siamese networks, concepts of similarity, one-shot learning and context/memory awareness. The aim is to improve micro F1 of per-word classification on the huge real-world document dataset. The results verify the hypothesis that trainable access to a similar (yet still different) page together with its already known target information improves the information extraction. Furthermore, the experiments confirm that all proposed architecture parts are all required to beat the previous results. The best model improves the previous state-of-the-art results by an 8.25 gain in F1 score. Qualitative analysis is provided to verify that the new model performs better for all target classes. Additionally, multiple structural observations about the causes of the underperformance of some architectures are revealed. All the source codes, parameters and implementation details are published together with the dataset in the hope to push the research boundaries since all the techniques used in this work are not problem-specific and can be generalized for other tasks and contexts.
[ { "created": "Sat, 17 Oct 2020 21:34:52 GMT", "version": "v1" }, { "created": "Sat, 13 Mar 2021 21:36:56 GMT", "version": "v2" } ]
2021-06-15
[ [ "Holeček", "Martin", "" ] ]
The automation of document processing is gaining recent attention due to the great potential to reduce manual work through improved methods and hardware. Neural networks have been successfully applied before - even though they have been trained only on relatively small datasets with hundreds of documents so far. To successfully explore deep learning techniques and improve the information extraction results, a dataset with more than twenty-five thousand documents has been compiled, anonymized and is published as a part of this work. We will expand our previous work where we proved that convolutions, graph convolutions and self-attention can work together and exploit all the information present in a structured document. Taking the fully trainable method one step further, we will now design and examine various approaches to using siamese networks, concepts of similarity, one-shot learning and context/memory awareness. The aim is to improve micro F1 of per-word classification on the huge real-world document dataset. The results verify the hypothesis that trainable access to a similar (yet still different) page together with its already known target information improves the information extraction. Furthermore, the experiments confirm that all proposed architecture parts are all required to beat the previous results. The best model improves the previous state-of-the-art results by an 8.25 gain in F1 score. Qualitative analysis is provided to verify that the new model performs better for all target classes. Additionally, multiple structural observations about the causes of the underperformance of some architectures are revealed. All the source codes, parameters and implementation details are published together with the dataset in the hope to push the research boundaries since all the techniques used in this work are not problem-specific and can be generalized for other tasks and contexts.
2307.09036
Yingchaojie Feng
Yingchaojie Feng, Xingbo Wang, Kam Kwai Wong, Sijia Wang, Yuhong Lu, Minfeng Zhu, Baicheng Wang, Wei Chen
PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation
Accepted full paper for IEEE VIS 2023
null
10.1109/TVCG.2023.3327168
null
cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.
[ { "created": "Tue, 18 Jul 2023 07:46:25 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 09:44:57 GMT", "version": "v2" } ]
2023-11-02
[ [ "Feng", "Yingchaojie", "" ], [ "Wang", "Xingbo", "" ], [ "Wong", "Kam Kwai", "" ], [ "Wang", "Sijia", "" ], [ "Lu", "Yuhong", "" ], [ "Zhu", "Minfeng", "" ], [ "Wang", "Baicheng", "" ], [ "Chen", "Wei", "" ] ]
Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.
2209.06692
Lingyu Du
Lingyu Du, Guohao Lan
FreeGaze: Resource-efficient Gaze Estimation via Frequency Domain Contrastive Learning
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gaze estimation is of great importance to many scientific fields and daily applications, ranging from fundamental research in cognitive psychology to attention-aware mobile systems. While recent advancements in deep learning have yielded remarkable successes in building highly accurate gaze estimation systems, the associated high computational cost and the reliance on large-scale labeled gaze data for supervised learning place challenges on the practical use of existing solutions. To move beyond these limitations, we present FreeGaze, a resource-efficient framework for unsupervised gaze representation learning. FreeGaze incorporates the frequency domain gaze estimation and the contrastive gaze representation learning in its design. The former significantly alleviates the computational burden in both system calibration and gaze estimation, and dramatically reduces the system latency; while the latter overcomes the data labeling hurdle of existing supervised learning-based counterparts, and ensures efficient gaze representation learning in the absence of gaze label. Our evaluation on two gaze estimation datasets shows that FreeGaze can achieve comparable gaze estimation accuracy with existing supervised learning-based approach, while enabling up to 6.81 and 1.67 times speedup in system calibration and gaze estimation, respectively.
[ { "created": "Wed, 14 Sep 2022 14:51:52 GMT", "version": "v1" } ]
2022-09-15
[ [ "Du", "Lingyu", "" ], [ "Lan", "Guohao", "" ] ]
Gaze estimation is of great importance to many scientific fields and daily applications, ranging from fundamental research in cognitive psychology to attention-aware mobile systems. While recent advancements in deep learning have yielded remarkable successes in building highly accurate gaze estimation systems, the associated high computational cost and the reliance on large-scale labeled gaze data for supervised learning place challenges on the practical use of existing solutions. To move beyond these limitations, we present FreeGaze, a resource-efficient framework for unsupervised gaze representation learning. FreeGaze incorporates the frequency domain gaze estimation and the contrastive gaze representation learning in its design. The former significantly alleviates the computational burden in both system calibration and gaze estimation, and dramatically reduces the system latency; while the latter overcomes the data labeling hurdle of existing supervised learning-based counterparts, and ensures efficient gaze representation learning in the absence of gaze label. Our evaluation on two gaze estimation datasets shows that FreeGaze can achieve comparable gaze estimation accuracy with existing supervised learning-based approach, while enabling up to 6.81 and 1.67 times speedup in system calibration and gaze estimation, respectively.
2011.10134
Quanquan Gu
Dongruo Zhou and Jiahao Chen and Quanquan Gu
Provable Multi-Objective Reinforcement Learning with Generative Models
10 pages, Workshop on Real-World Reinforcement Learning at the 34th Conference on Neural Information ProcessingSystems (NeurIPS 2020), Vancouver, Canada
null
null
null
cs.LG cs.AI math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-objective reinforcement learning (MORL) is an extension of ordinary, single-objective reinforcement learning (RL) that is applicable to many real-world tasks where multiple objectives exist without known relative costs. We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives. Existing methods require strong assumptions such as exact knowledge of the multi-objective Markov decision process, and are analyzed in the limit of infinite data and time. We propose a new algorithm called model-based envelop value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm in Yang et al., 2019. Our method can learn a near-optimal value function with polynomial sample complexity and linear convergence speed. To the best of our knowledge, this is the first finite-sample analysis of MORL algorithms.
[ { "created": "Thu, 19 Nov 2020 22:35:31 GMT", "version": "v1" }, { "created": "Mon, 11 Jan 2021 07:28:13 GMT", "version": "v2" } ]
2021-01-12
[ [ "Zhou", "Dongruo", "" ], [ "Chen", "Jiahao", "" ], [ "Gu", "Quanquan", "" ] ]
Multi-objective reinforcement learning (MORL) is an extension of ordinary, single-objective reinforcement learning (RL) that is applicable to many real-world tasks where multiple objectives exist without known relative costs. We study the problem of single policy MORL, which learns an optimal policy given the preference of objectives. Existing methods require strong assumptions such as exact knowledge of the multi-objective Markov decision process, and are analyzed in the limit of infinite data and time. We propose a new algorithm called model-based envelop value iteration (EVI), which generalizes the enveloped multi-objective $Q$-learning algorithm in Yang et al., 2019. Our method can learn a near-optimal value function with polynomial sample complexity and linear convergence speed. To the best of our knowledge, this is the first finite-sample analysis of MORL algorithms.
1301.3369
Yuichiro Fujiwara
Yuichiro Fujiwara
Self-synchronizing pulse position modulation with error tolerance
11 pages, 1 figure, 3 tables. Final accepted version for publication in the IEEE Transactions on Information Theory. This version incorporates minor corrections and some improvements including additional explicit examples, performance comparisons, and a discussion on symbol error rates for use in an FSO link
IEEE Transactions on Information Theory 59 (2013) 5352-5362
10.1109/TIT.2013.2262094
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pulse position modulation (PPM) is a popular signal modulation technique which creates M-ary data by means of the position of a pulse within a time interval. While PPM and its variations have great advantages in many contexts, this type of modulation is vulnerable to loss of synchronization, potentially causing a severe error floor or throughput penalty even when little or no noise is assumed. Another disadvantage is that this type of modulation typically offers no error correction mechanism on its own, making them sensitive to intersymbol interference and environmental noise. In this paper we propose a coding theoretic variation of PPM that allows for significantly more efficient symbol and frame synchronization as well as strong error correction. The proposed scheme can be divided into a synchronization layer and a modulation layer. This makes our technique compatible with major existing techniques such as standard PPM, multipluse PPM, and expurgated PPM as well in that the scheme can be realized by adding a simple synchronization layer to one of these standard techniques. We also develop a generalization of expurgated PPM suited for the modulation layer of the proposed self-synchronizing modulation scheme. This generalized PPM can also be used as stand-alone error-correcting PPM with a larger number of available symbols.
[ { "created": "Tue, 15 Jan 2013 14:51:12 GMT", "version": "v1" }, { "created": "Sun, 5 May 2013 03:19:36 GMT", "version": "v2" } ]
2013-08-20
[ [ "Fujiwara", "Yuichiro", "" ] ]
Pulse position modulation (PPM) is a popular signal modulation technique which creates M-ary data by means of the position of a pulse within a time interval. While PPM and its variations have great advantages in many contexts, this type of modulation is vulnerable to loss of synchronization, potentially causing a severe error floor or throughput penalty even when little or no noise is assumed. Another disadvantage is that this type of modulation typically offers no error correction mechanism on its own, making them sensitive to intersymbol interference and environmental noise. In this paper we propose a coding theoretic variation of PPM that allows for significantly more efficient symbol and frame synchronization as well as strong error correction. The proposed scheme can be divided into a synchronization layer and a modulation layer. This makes our technique compatible with major existing techniques such as standard PPM, multipluse PPM, and expurgated PPM as well in that the scheme can be realized by adding a simple synchronization layer to one of these standard techniques. We also develop a generalization of expurgated PPM suited for the modulation layer of the proposed self-synchronizing modulation scheme. This generalized PPM can also be used as stand-alone error-correcting PPM with a larger number of available symbols.
2002.05915
Jinglian He
Jinglian He, Kaiqiang Yu and Yuanming Shi
Coordinated Passive Beamforming for Distributed Intelligent Reflecting Surfaces Network
Accepted by Pro. IEEE Veh. Technol. Conf. (VTC), Antwerp, Belgium, May 2020
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent reflecting surface (IRS) is a proposing technology in 6G to enhance the performance of wireless networks by smartly reconfiguring the propagation environment with a large number of passive reflecting elements. However, current works mainly focus on single IRS-empowered wireless networks, where the channel rank deficiency problem has emerged. In this paper, we propose a distributed IRS-empowered communication network architecture, where multiple source-destination pairs communicate through multiple distributed IRSs. We further contribute to maximize the achievable sum-rates in this network via jointly optimizing the transmit power vector at the sources and the phase shift matrix with passive beamforming at all distributed IRSs. Unfortunately, this problem turns out to be non-convex and highly intractable, for which an alternating approach is developed via solving the resulting fractional programming problems alternatively. In particular, the closed-form expressions are proposed for coordinated passive beamforming at IRSs. The numerical results will demonstrate the algorithmic advantages and desirable performances of the distributed IRS-empowered communication network.
[ { "created": "Fri, 14 Feb 2020 08:31:27 GMT", "version": "v1" } ]
2020-02-17
[ [ "He", "Jinglian", "" ], [ "Yu", "Kaiqiang", "" ], [ "Shi", "Yuanming", "" ] ]
Intelligent reflecting surface (IRS) is a proposing technology in 6G to enhance the performance of wireless networks by smartly reconfiguring the propagation environment with a large number of passive reflecting elements. However, current works mainly focus on single IRS-empowered wireless networks, where the channel rank deficiency problem has emerged. In this paper, we propose a distributed IRS-empowered communication network architecture, where multiple source-destination pairs communicate through multiple distributed IRSs. We further contribute to maximize the achievable sum-rates in this network via jointly optimizing the transmit power vector at the sources and the phase shift matrix with passive beamforming at all distributed IRSs. Unfortunately, this problem turns out to be non-convex and highly intractable, for which an alternating approach is developed via solving the resulting fractional programming problems alternatively. In particular, the closed-form expressions are proposed for coordinated passive beamforming at IRSs. The numerical results will demonstrate the algorithmic advantages and desirable performances of the distributed IRS-empowered communication network.
1508.02082
Binjie Benjamin Lim
Benjamin Lim
Vulnerability Analysis of GWireless
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless networking has become very popular in recent years due to the increase in adoption of mobile devices. As more and more employees demand for Wi-Fi access for their devices, more companies have been jumping onto the "Bring Your Own Device" (BYOD) bandwagon[1] to appease their employees. One such example of an enterprise wireless infrastructure is the George Washington University's GWireless. For this project, I will attempt to capture hashes of authentication credentials from users who are connecting to the GWireless network using what is commonly known as the "evil twin" attack. I will document the hardware, software used and steps taken to configure the devices. I will then evaluate the feasibility of such an attack, explore variations of the attack and document measures that can be taken to prevent such an attack.
[ { "created": "Sun, 9 Aug 2015 20:45:50 GMT", "version": "v1" } ]
2015-08-11
[ [ "Lim", "Benjamin", "" ] ]
Wireless networking has become very popular in recent years due to the increase in adoption of mobile devices. As more and more employees demand for Wi-Fi access for their devices, more companies have been jumping onto the "Bring Your Own Device" (BYOD) bandwagon[1] to appease their employees. One such example of an enterprise wireless infrastructure is the George Washington University's GWireless. For this project, I will attempt to capture hashes of authentication credentials from users who are connecting to the GWireless network using what is commonly known as the "evil twin" attack. I will document the hardware, software used and steps taken to configure the devices. I will then evaluate the feasibility of such an attack, explore variations of the attack and document measures that can be taken to prevent such an attack.
2209.00084
Sudeep Pasricha
Febin Sunny, Mahdi Nikdast, Sudeep Pasricha
RecLight: A Recurrent Neural Network Accelerator with Integrated Silicon Photonics
null
null
null
null
cs.LG cs.AR cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recurrent Neural Networks (RNNs) are used in applications that learn dependencies in data sequences, such as speech recognition, human activity recognition, and anomaly detection. In recent years, newer RNN variants, such as GRUs and LSTMs, have been used for implementing these applications. As many of these applications are employed in real-time scenarios, accelerating RNN/LSTM/GRU inference is crucial. In this paper, we propose a novel photonic hardware accelerator called RecLight for accelerating simple RNNs, GRUs, and LSTMs. Simulation results indicate that RecLight achieves 37x lower energy-per-bit and 10% better throughput compared to the state-of-the-art.
[ { "created": "Wed, 31 Aug 2022 19:36:01 GMT", "version": "v1" } ]
2022-09-02
[ [ "Sunny", "Febin", "" ], [ "Nikdast", "Mahdi", "" ], [ "Pasricha", "Sudeep", "" ] ]
Recurrent Neural Networks (RNNs) are used in applications that learn dependencies in data sequences, such as speech recognition, human activity recognition, and anomaly detection. In recent years, newer RNN variants, such as GRUs and LSTMs, have been used for implementing these applications. As many of these applications are employed in real-time scenarios, accelerating RNN/LSTM/GRU inference is crucial. In this paper, we propose a novel photonic hardware accelerator called RecLight for accelerating simple RNNs, GRUs, and LSTMs. Simulation results indicate that RecLight achieves 37x lower energy-per-bit and 10% better throughput compared to the state-of-the-art.
cs/0206028
Wolfgang Eiden
Wolfgang Eiden
Knowledge management for enterprises (Wissensmanagement fuer Unternehmen)
published in January 2000, 22 pages, 13 figures, german
null
null
null
cs.IR cs.AI
null
Although knowledge is one of the most valuable resource of enterprises and an important production and competition factor, this intellectual potential is often used (or maintained) only inadequate by the enterprises. Therefore, in a globalised and growing market the optimal usage of existing knowledge represents a key factor for enterprises of the future. Here, knowledge management systems should engage facilitating. Because geographically far distributed establishments cause, however, a distributed system, this paper should uncover the spectrum connected with it and present a possible basic approach which is based on ontologies and modern, platform independent technologies. Last but not least this attempt, as well as general questions of the knowledge management, are discussed.
[ { "created": "Wed, 19 Jun 2002 22:13:41 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2002 00:09:30 GMT", "version": "v2" } ]
2016-08-31
[ [ "Eiden", "Wolfgang", "" ] ]
Although knowledge is one of the most valuable resource of enterprises and an important production and competition factor, this intellectual potential is often used (or maintained) only inadequate by the enterprises. Therefore, in a globalised and growing market the optimal usage of existing knowledge represents a key factor for enterprises of the future. Here, knowledge management systems should engage facilitating. Because geographically far distributed establishments cause, however, a distributed system, this paper should uncover the spectrum connected with it and present a possible basic approach which is based on ontologies and modern, platform independent technologies. Last but not least this attempt, as well as general questions of the knowledge management, are discussed.
2304.05008
Dongqi Han
Dongqi Han, Kenji Doya, Dongsheng Li, Jun Tani
Habits and goals in synergy: a variational Bayesian framework for behavior
null
Nat Commun 15, 4461 (2024)
10.1038/s41467-024-48577-7
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
How to behave efficiently and flexibly is a central problem for understanding biological agents and creating intelligent embodied AI. It has been well known that behavior can be classified as two types: reward-maximizing habitual behavior, which is fast while inflexible; and goal-directed behavior, which is flexible while slow. Conventionally, habitual and goal-directed behaviors are considered handled by two distinct systems in the brain. Here, we propose to bridge the gap between the two behaviors, drawing on the principles of variational Bayesian theory. We incorporate both behaviors in one framework by introducing a Bayesian latent variable called "intention". The habitual behavior is generated by using prior distribution of intention, which is goal-less; and the goal-directed behavior is generated by the posterior distribution of intention, which is conditioned on the goal. Building on this idea, we present a novel Bayesian framework for modeling behaviors. Our proposed framework enables skill sharing between the two kinds of behaviors, and by leveraging the idea of predictive coding, it enables an agent to seamlessly generalize from habitual to goal-directed behavior without requiring additional training. The proposed framework suggests a fresh perspective for cognitive science and embodied AI, highlighting the potential for greater integration between habitual and goal-directed behaviors.
[ { "created": "Tue, 11 Apr 2023 06:28:14 GMT", "version": "v1" } ]
2024-07-09
[ [ "Han", "Dongqi", "" ], [ "Doya", "Kenji", "" ], [ "Li", "Dongsheng", "" ], [ "Tani", "Jun", "" ] ]
How to behave efficiently and flexibly is a central problem for understanding biological agents and creating intelligent embodied AI. It has been well known that behavior can be classified as two types: reward-maximizing habitual behavior, which is fast while inflexible; and goal-directed behavior, which is flexible while slow. Conventionally, habitual and goal-directed behaviors are considered handled by two distinct systems in the brain. Here, we propose to bridge the gap between the two behaviors, drawing on the principles of variational Bayesian theory. We incorporate both behaviors in one framework by introducing a Bayesian latent variable called "intention". The habitual behavior is generated by using prior distribution of intention, which is goal-less; and the goal-directed behavior is generated by the posterior distribution of intention, which is conditioned on the goal. Building on this idea, we present a novel Bayesian framework for modeling behaviors. Our proposed framework enables skill sharing between the two kinds of behaviors, and by leveraging the idea of predictive coding, it enables an agent to seamlessly generalize from habitual to goal-directed behavior without requiring additional training. The proposed framework suggests a fresh perspective for cognitive science and embodied AI, highlighting the potential for greater integration between habitual and goal-directed behaviors.
2008.11459
Xin Kong
Xin Kong, Xuemeng Yang, Guangyao Zhai, Xiangrui Zhao, Xianfang Zeng, Mengmeng Wang, Yong Liu, Wanlong Li, Feng Wen
Semantic Graph Based Place Recognition for 3D Point Clouds
8 pages. Accpeted by IROS-2020
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the difficulty in generating the effective descriptors which are robust to occlusion and viewpoint changes, place recognition for 3D point cloud remains an open issue. Unlike most of the existing methods that focus on extracting local, global, and statistical features of raw point clouds, our method aims at the semantic level that can be superior in terms of robustness to environmental changes. Inspired by the perspective of humans, who recognize scenes through identifying semantic objects and capturing their relations, this paper presents a novel semantic graph based approach for place recognition. First, we propose a novel semantic graph representation for the point cloud scenes by reserving the semantic and topological information of the raw point cloud. Thus, place recognition is modeled as a graph matching problem. Then we design a fast and effective graph similarity network to compute the similarity. Exhaustive evaluations on the KITTI dataset show that our approach is robust to the occlusion as well as viewpoint changes and outperforms the state-of-the-art methods with a large margin. Our code is available at: \url{https://github.com/kxhit/SG_PR}.
[ { "created": "Wed, 26 Aug 2020 09:27:26 GMT", "version": "v1" } ]
2020-08-27
[ [ "Kong", "Xin", "" ], [ "Yang", "Xuemeng", "" ], [ "Zhai", "Guangyao", "" ], [ "Zhao", "Xiangrui", "" ], [ "Zeng", "Xianfang", "" ], [ "Wang", "Mengmeng", "" ], [ "Liu", "Yong", "" ], [ "Li", "Wanlong", "" ], [ "Wen", "Feng", "" ] ]
Due to the difficulty in generating the effective descriptors which are robust to occlusion and viewpoint changes, place recognition for 3D point cloud remains an open issue. Unlike most of the existing methods that focus on extracting local, global, and statistical features of raw point clouds, our method aims at the semantic level that can be superior in terms of robustness to environmental changes. Inspired by the perspective of humans, who recognize scenes through identifying semantic objects and capturing their relations, this paper presents a novel semantic graph based approach for place recognition. First, we propose a novel semantic graph representation for the point cloud scenes by reserving the semantic and topological information of the raw point cloud. Thus, place recognition is modeled as a graph matching problem. Then we design a fast and effective graph similarity network to compute the similarity. Exhaustive evaluations on the KITTI dataset show that our approach is robust to the occlusion as well as viewpoint changes and outperforms the state-of-the-art methods with a large margin. Our code is available at: \url{https://github.com/kxhit/SG_PR}.
2206.14452
Yiqi Deng
Yiqi Deng and Siu Ming Yiu
Deep Multiple Instance Learning For Forecasting Stock Trends Using Financial News
17 pages, 4 figures
null
null
null
cs.LG q-fin.CP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major source of information can be taken from financial news articles, which have some correlations about the fluctuation of stock trends. In this paper, we investigate the influences of financial news on the stock trends, from a multi-instance view. The intuition behind this is based on the news uncertainty of varying intervals of news occurrences and the lack of annotation in every single financial news. Under the scenario of Multiple Instance Learning (MIL) where training instances are arranged in bags, and a label is assigned for the entire bag instead of instances, we develop a flexible and adaptive multi-instance learning model and evaluate its ability in directional movement forecast of Standard & Poors 500 index on financial news dataset. Specifically, we treat each trading day as one bag, with certain amounts of news happening on each trading day as instances in each bag. Experiment results demonstrate that our proposed multi-instance-based framework gains outstanding results in terms of the accuracy of trend prediction, compared with other state-of-art approaches and baselines.
[ { "created": "Wed, 29 Jun 2022 08:00:13 GMT", "version": "v1" } ]
2022-06-30
[ [ "Deng", "Yiqi", "" ], [ "Yiu", "Siu Ming", "" ] ]
A major source of information can be taken from financial news articles, which have some correlations about the fluctuation of stock trends. In this paper, we investigate the influences of financial news on the stock trends, from a multi-instance view. The intuition behind this is based on the news uncertainty of varying intervals of news occurrences and the lack of annotation in every single financial news. Under the scenario of Multiple Instance Learning (MIL) where training instances are arranged in bags, and a label is assigned for the entire bag instead of instances, we develop a flexible and adaptive multi-instance learning model and evaluate its ability in directional movement forecast of Standard & Poors 500 index on financial news dataset. Specifically, we treat each trading day as one bag, with certain amounts of news happening on each trading day as instances in each bag. Experiment results demonstrate that our proposed multi-instance-based framework gains outstanding results in terms of the accuracy of trend prediction, compared with other state-of-art approaches and baselines.
2208.01721
Aresh Dadlani
Sarina Jami, Iman Sahebi, Mohammad M. Sabermahani, Seyed P. Shariatpanahi, Aresh Dadlani, Behrouz Maham
Rumor Stance Classification in Online Social Networks: The State-of-the-Art, Prospects, and Future Challenges
16 pages, 3 figures, journal
null
null
null
cs.SI cs.NI
http://creativecommons.org/licenses/by/4.0/
The emergence of the Internet as a ubiquitous technology has facilitated the rapid evolution of social media as the leading virtual platform for communication, content sharing, and information dissemination. In spite of revolutionizing the way news is delivered to people, this technology has also brought along with itself inevitable demerits. One such drawback is the spread of rumors expedited by social media platforms, which may provoke doubt and fear. Therefore, it is essential to debunk rumors before their widespread use. Over the years, many studies have been conducted to develop effective rumor verification systems. One aspect of such studies focuses on rumor stance classification, which involves the task of utilizing user viewpoints regarding a rumorous post to better predict the veracity of a rumor. Relying on user stances in rumor verification has gained significant importance, for it has resulted in significant improvements in the model performance. In this paper, we conduct a comprehensive literature review of rumor stance classification in complex online social networks (OSNs). In particular, we present a thorough description of these approaches and compare their performances. Moreover, we introduce multiple datasets available for this purpose and highlight their limitations. Finally, challenges and future directions are discussed to stimulate further relevant research efforts.
[ { "created": "Tue, 2 Aug 2022 20:07:49 GMT", "version": "v1" }, { "created": "Mon, 31 Oct 2022 19:37:10 GMT", "version": "v2" } ]
2022-11-02
[ [ "Jami", "Sarina", "" ], [ "Sahebi", "Iman", "" ], [ "Sabermahani", "Mohammad M.", "" ], [ "Shariatpanahi", "Seyed P.", "" ], [ "Dadlani", "Aresh", "" ], [ "Maham", "Behrouz", "" ] ]
The emergence of the Internet as a ubiquitous technology has facilitated the rapid evolution of social media as the leading virtual platform for communication, content sharing, and information dissemination. In spite of revolutionizing the way news is delivered to people, this technology has also brought along with itself inevitable demerits. One such drawback is the spread of rumors expedited by social media platforms, which may provoke doubt and fear. Therefore, it is essential to debunk rumors before their widespread use. Over the years, many studies have been conducted to develop effective rumor verification systems. One aspect of such studies focuses on rumor stance classification, which involves the task of utilizing user viewpoints regarding a rumorous post to better predict the veracity of a rumor. Relying on user stances in rumor verification has gained significant importance, for it has resulted in significant improvements in the model performance. In this paper, we conduct a comprehensive literature review of rumor stance classification in complex online social networks (OSNs). In particular, we present a thorough description of these approaches and compare their performances. Moreover, we introduce multiple datasets available for this purpose and highlight their limitations. Finally, challenges and future directions are discussed to stimulate further relevant research efforts.
1808.09772
Antoine Tixier
Antoine J.-P. Tixier
Notes on Deep Learning for NLP
work in progress
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
My notes on Deep Learning for NLP.
[ { "created": "Wed, 29 Aug 2018 12:58:45 GMT", "version": "v1" }, { "created": "Thu, 30 Aug 2018 17:44:54 GMT", "version": "v2" } ]
2018-08-31
[ [ "Tixier", "Antoine J. -P.", "" ] ]
My notes on Deep Learning for NLP.
1804.03230
Tien-Ju Yang
Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, Hartwig Adam
NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
Accepted by ECCV 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7$\times$ speedup in measured inference latency with equal or higher accuracy on MobileNets (V1&V2).
[ { "created": "Mon, 9 Apr 2018 20:45:26 GMT", "version": "v1" }, { "created": "Fri, 28 Sep 2018 19:20:16 GMT", "version": "v2" } ]
2018-10-02
[ [ "Yang", "Tien-Ju", "" ], [ "Howard", "Andrew", "" ], [ "Chen", "Bo", "" ], [ "Zhang", "Xiao", "" ], [ "Go", "Alec", "" ], [ "Sandler", "Mark", "" ], [ "Sze", "Vivienne", "" ], [ "Adam", "Hartwig", "" ] ]
This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7$\times$ speedup in measured inference latency with equal or higher accuracy on MobileNets (V1&V2).
2304.00057
Khandaker Foysal Haque
Khandaker Foysal Haque, Milin Zhang, Francesco Restuccia
SiMWiSense: Simultaneous Multi-Subject Activity Classification Through Wi-Fi Signals
This work has been accepted for publication in IEEE WoWMoM 2023
null
10.1109/WoWMoM57956.2023.00019
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Recent advances in Wi-Fi sensing have ushered in a plethora of pervasive applications in home surveillance, remote healthcare, road safety, and home entertainment, among others. Most of the existing works are limited to the activity classification of a single human subject at a given time. Conversely, a more realistic scenario is to achieve simultaneous, multi-subject activity classification. The first key challenge in that context is that the number of classes grows exponentially with the number of subjects and activities. Moreover, it is known that Wi-Fi sensing systems struggle to adapt to new environments and subjects. To address both issues, we propose SiMWiSense, the first framework for simultaneous multi-subject activity classification based on Wi-Fi that generalizes to multiple environments and subjects. We address the scalability issue by using the Channel State Information (CSI) computed from the device positioned closest to the subject. We experimentally prove this intuition by confirming that the best accuracy is experienced when the CSI computed by the transceiver positioned closest to the subject is used for classification. To address the generalization issue, we develop a brand-new few-shot learning algorithm named Feature Reusable Embedding Learning (FREL). Through an extensive data collection campaign in 3 different environments and 3 subjects performing 20 different activities simultaneously, we demonstrate that SiMWiSense achieves classification accuracy of up to 97%, while FREL improves the accuracy by 85% in comparison to a traditional Convolutional Neural Network (CNN) and up to 20% when compared to the state-of-the-art few-shot embedding learning (FSEL), by using only 15 seconds of additional data for each class. For reproducibility purposes, we share our 1TB dataset and code repository.
[ { "created": "Fri, 31 Mar 2023 18:19:23 GMT", "version": "v1" } ]
2023-09-11
[ [ "Haque", "Khandaker Foysal", "" ], [ "Zhang", "Milin", "" ], [ "Restuccia", "Francesco", "" ] ]
Recent advances in Wi-Fi sensing have ushered in a plethora of pervasive applications in home surveillance, remote healthcare, road safety, and home entertainment, among others. Most of the existing works are limited to the activity classification of a single human subject at a given time. Conversely, a more realistic scenario is to achieve simultaneous, multi-subject activity classification. The first key challenge in that context is that the number of classes grows exponentially with the number of subjects and activities. Moreover, it is known that Wi-Fi sensing systems struggle to adapt to new environments and subjects. To address both issues, we propose SiMWiSense, the first framework for simultaneous multi-subject activity classification based on Wi-Fi that generalizes to multiple environments and subjects. We address the scalability issue by using the Channel State Information (CSI) computed from the device positioned closest to the subject. We experimentally prove this intuition by confirming that the best accuracy is experienced when the CSI computed by the transceiver positioned closest to the subject is used for classification. To address the generalization issue, we develop a brand-new few-shot learning algorithm named Feature Reusable Embedding Learning (FREL). Through an extensive data collection campaign in 3 different environments and 3 subjects performing 20 different activities simultaneously, we demonstrate that SiMWiSense achieves classification accuracy of up to 97%, while FREL improves the accuracy by 85% in comparison to a traditional Convolutional Neural Network (CNN) and up to 20% when compared to the state-of-the-art few-shot embedding learning (FSEL), by using only 15 seconds of additional data for each class. For reproducibility purposes, we share our 1TB dataset and code repository.
1710.02192
Juan Castorena
Juan Castorena and Siddharth Agarwal
Ground Edge based LIDAR Localization without a Reflectivity Calibration for Autonomous Driving
null
null
10.1109/LRA.2017.2748180
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose an alternative formulation to the problem of ground reflectivity grid based localization involving laser scanned data from multiple LIDARs mounted on autonomous vehicles. The driving idea of our localization formulation is an alternative edge reflectivity grid representation which is invariant to laser source, angle of incidence, range and robot surveying motion. Such property eliminates the need of the post-factory reflectivity calibration whose time requirements are infeasible in mass produced robots/vehicles. Our experiments demonstrate that we can achieve better performance than state of the art on ground reflectivity inference-map based localization at no additional computational burden.
[ { "created": "Thu, 5 Oct 2017 19:56:32 GMT", "version": "v1" } ]
2017-10-09
[ [ "Castorena", "Juan", "" ], [ "Agarwal", "Siddharth", "" ] ]
In this work we propose an alternative formulation to the problem of ground reflectivity grid based localization involving laser scanned data from multiple LIDARs mounted on autonomous vehicles. The driving idea of our localization formulation is an alternative edge reflectivity grid representation which is invariant to laser source, angle of incidence, range and robot surveying motion. Such property eliminates the need of the post-factory reflectivity calibration whose time requirements are infeasible in mass produced robots/vehicles. Our experiments demonstrate that we can achieve better performance than state of the art on ground reflectivity inference-map based localization at no additional computational burden.
2304.05687
Rock Yuren Pang
Rock Yuren Pang, Katharina Reinecke
Anticipating Unintended Consequences of Technology Using Insights from Creativity Support Tools
In CHI '23 Workshop on Designing Technology and Policy Simultaneously: Towards A Research Agenda and New Practice, April 23, 2023
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Our society has been increasingly witnessing a number of negative, unintended consequences of digital technologies. While post-hoc policy regulation is crucial in addressing these issues, reasonably anticipating the consequences before deploying technology can help mitigate potential harm to society in the first place. Yet, the quest to anticipate potential harms can be difficult without seeing digital technologies deployed in the real world. In this position paper, we argue that anticipating unintended consequences of technology can be facilitated through creativity-enhancing interventions, such as by building on existing knowledge and insights from diverse stakeholders. Using lessons learned from prior work on creativity-support tools, the HCI community is uniquely equipped to design novel systems that aid in anticipating negative unintended consequences of technology on society.
[ { "created": "Wed, 12 Apr 2023 08:25:22 GMT", "version": "v1" } ]
2023-04-13
[ [ "Pang", "Rock Yuren", "" ], [ "Reinecke", "Katharina", "" ] ]
Our society has been increasingly witnessing a number of negative, unintended consequences of digital technologies. While post-hoc policy regulation is crucial in addressing these issues, reasonably anticipating the consequences before deploying technology can help mitigate potential harm to society in the first place. Yet, the quest to anticipate potential harms can be difficult without seeing digital technologies deployed in the real world. In this position paper, we argue that anticipating unintended consequences of technology can be facilitated through creativity-enhancing interventions, such as by building on existing knowledge and insights from diverse stakeholders. Using lessons learned from prior work on creativity-support tools, the HCI community is uniquely equipped to design novel systems that aid in anticipating negative unintended consequences of technology on society.
1602.05181
Arindam Biswas
Arindam Biswas
A Simple Condition for the Existence of Transversals
null
null
null
null
cs.DM math.CO
http://creativecommons.org/licenses/by-sa/4.0/
Hall's Theorem is a basic result in Combinatorics which states that the obvious necesssary condition for a finite family of sets to have a transversal is also sufficient. We present a sufficient (but not necessary) condition on the sizes of the sets in the family and the sizes of their intersections so that a transversal exists. Using this, we prove that in a bipartite graph $G$ (bipartition $\{A, B\}$), without 4-cycles, if $\deg(v) \geq \sqrt{2e|A|}$ for all $v \in A$, then $G$ has a matching of size $|A|$.
[ { "created": "Tue, 16 Feb 2016 20:54:24 GMT", "version": "v1" } ]
2016-02-17
[ [ "Biswas", "Arindam", "" ] ]
Hall's Theorem is a basic result in Combinatorics which states that the obvious necesssary condition for a finite family of sets to have a transversal is also sufficient. We present a sufficient (but not necessary) condition on the sizes of the sets in the family and the sizes of their intersections so that a transversal exists. Using this, we prove that in a bipartite graph $G$ (bipartition $\{A, B\}$), without 4-cycles, if $\deg(v) \geq \sqrt{2e|A|}$ for all $v \in A$, then $G$ has a matching of size $|A|$.
1910.00294
Yunsu Kim
Yunsu Kim, Duc Thanh Tran, Hermann Ney
When and Why is Document-level Context Useful in Neural Machine Translation?
DiscoMT 2019 camera-ready
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Document-level context has received lots of attention for compensating neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvement with only a few selected examples or targeted test sets. We extensively quantify the causes of improvements by a document-level model in general test sets, clarifying the limit of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for the context modeling and very long context is not helpful for NMT.
[ { "created": "Tue, 1 Oct 2019 10:40:26 GMT", "version": "v1" } ]
2019-10-02
[ [ "Kim", "Yunsu", "" ], [ "Tran", "Duc Thanh", "" ], [ "Ney", "Hermann", "" ] ]
Document-level context has received lots of attention for compensating neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvement with only a few selected examples or targeted test sets. We extensively quantify the causes of improvements by a document-level model in general test sets, clarifying the limit of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for the context modeling and very long context is not helpful for NMT.
2303.10631
Peter Kostol\'anyi
Peter Kostol\'anyi
Bideterministic Weighted Automata
This is an extended version of an article published in the proceedings of the conference CAI 2022
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A finite automaton is called bideterministic if it is both deterministic and codeterministic -- that is, if it is deterministic and its transpose is deterministic as well. The study of such automata in a weighted setting is initiated. All trim bideterministic weighted automata over integral domains and over positive semirings are proved to be minimal. On the contrary, it is observed that this property does not hold over commutative rings in general: non-minimal trim bideterministic weighted automata do exist over all semirings that are not zero-divisor free, and over many such semirings, these automata might not even admit equivalents that are both minimal and bideterministic. The problem of determining whether a given rational series is realised by a bideterministic automaton is shown to be decidable over fields and over tropical semirings. An example of a positive semiring over which this problem becomes undecidable is given as well.
[ { "created": "Sun, 19 Mar 2023 11:22:34 GMT", "version": "v1" }, { "created": "Wed, 17 May 2023 15:24:39 GMT", "version": "v2" }, { "created": "Fri, 29 Sep 2023 14:08:04 GMT", "version": "v3" } ]
2023-10-02
[ [ "Kostolányi", "Peter", "" ] ]
A finite automaton is called bideterministic if it is both deterministic and codeterministic -- that is, if it is deterministic and its transpose is deterministic as well. The study of such automata in a weighted setting is initiated. All trim bideterministic weighted automata over integral domains and over positive semirings are proved to be minimal. On the contrary, it is observed that this property does not hold over commutative rings in general: non-minimal trim bideterministic weighted automata do exist over all semirings that are not zero-divisor free, and over many such semirings, these automata might not even admit equivalents that are both minimal and bideterministic. The problem of determining whether a given rational series is realised by a bideterministic automaton is shown to be decidable over fields and over tropical semirings. An example of a positive semiring over which this problem becomes undecidable is given as well.
2404.03899
Jinwook Kim
Hyunyoung Jang, Jinwook Kim, Jeongmi Lee
Effects of Multisensory Feedback on the Perception and Performance of Virtual Reality Hand-Retargeted Interaction
17 pages, 8 figures, 2 tables
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Retargeting methods that modify the visual representation of real movements have been widely used to expand the interaction space and create engaging virtual reality experiences. For optimal user experience and performance, it is essential to specify the perception of retargeting and utilize the appropriate range of modification parameters. However, previous studies mostly concentrated on whether users perceived the target sense or not and rarely examined the perceptual accuracy and sensitivity to retargeting. Moreover, it is unknown how the perception and performance in hand-retargeted interactions are influenced by multisensory feedback. In this study, we used rigorous psychophysical methods to specify users' perceptual accuracy and sensitivity to hand-retargeting and provide acceptable ranges of retargeting parameters. We also presented different multisensory feedback simultaneously with the retargeting to probe its effect on users' perception and task performance. The experimental results showed that providing continuous multisensory feedback, proportionate to the distance between the virtual hand and the targeted destination, heightened the accuracy of users' perception of hand retargeting without altering their perceptual sensitivity. Furthermore, the utilization of multisensory feedback considerably improved the precision of task performance, particularly at lower gain factors. Based on these findings, we propose design guidelines and potential applications of VR hand-retargeted interactions and multisensory feedback for optimal user experience and performance.
[ { "created": "Fri, 5 Apr 2024 05:44:11 GMT", "version": "v1" } ]
2024-04-08
[ [ "Jang", "Hyunyoung", "" ], [ "Kim", "Jinwook", "" ], [ "Lee", "Jeongmi", "" ] ]
Retargeting methods that modify the visual representation of real movements have been widely used to expand the interaction space and create engaging virtual reality experiences. For optimal user experience and performance, it is essential to specify the perception of retargeting and utilize the appropriate range of modification parameters. However, previous studies mostly concentrated on whether users perceived the target sense or not and rarely examined the perceptual accuracy and sensitivity to retargeting. Moreover, it is unknown how the perception and performance in hand-retargeted interactions are influenced by multisensory feedback. In this study, we used rigorous psychophysical methods to specify users' perceptual accuracy and sensitivity to hand-retargeting and provide acceptable ranges of retargeting parameters. We also presented different multisensory feedback simultaneously with the retargeting to probe its effect on users' perception and task performance. The experimental results showed that providing continuous multisensory feedback, proportionate to the distance between the virtual hand and the targeted destination, heightened the accuracy of users' perception of hand retargeting without altering their perceptual sensitivity. Furthermore, the utilization of multisensory feedback considerably improved the precision of task performance, particularly at lower gain factors. Based on these findings, we propose design guidelines and potential applications of VR hand-retargeted interactions and multisensory feedback for optimal user experience and performance.
1703.07611
Georgios Zogopoulos-Papaliakos
Georgios Zogopoulos-Papaliakos, Kostas J. Kyriakopoulos
On the Selection of Calculable Residual Generators for UAV Fault Diagnosis
null
null
10.1109/MED.2016.7536003
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structural Analysis is an established method for Fault Detection and Identification (FDI) in large-scale systems, enabling the discovery of Analytical Redundancy Relations (ARRs) which serve as residual generators. However, most techniques used to enumerate ARRs do not specify the matching used to calculate each of those ARRs. This can result in non-implementable or unusable residual generators, in the presence of non-invertibilities in the equations involved or in lack of computational tools. In this paper, we propose a methodology which combines a priori and a posteriori information in order to reduce the time required to find implementable, usable residual generators of minimum cost. The method is applied to a fixed-wing Unmanned Aerial Vehicle (UAV) model.
[ { "created": "Wed, 22 Mar 2017 12:01:09 GMT", "version": "v1" } ]
2017-03-23
[ [ "Zogopoulos-Papaliakos", "Georgios", "" ], [ "Kyriakopoulos", "Kostas J.", "" ] ]
Structural Analysis is an established method for Fault Detection and Identification (FDI) in large-scale systems, enabling the discovery of Analytical Redundancy Relations (ARRs) which serve as residual generators. However, most techniques used to enumerate ARRs do not specify the matching used to calculate each of those ARRs. This can result in non-implementable or unusable residual generators, in the presence of non-invertibilities in the equations involved or in lack of computational tools. In this paper, we propose a methodology which combines a priori and a posteriori information in order to reduce the time required to find implementable, usable residual generators of minimum cost. The method is applied to a fixed-wing Unmanned Aerial Vehicle (UAV) model.
1805.04625
Shun Watanabe
Himanshu Tyagi, Shun Watanabe
Strong Converse using Change of Measure Arguments
35 pages, no figure; v2 updated references
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The strong converse for a coding theorem shows that the optimal asymptotic rate possible with vanishing error cannot be improved by allowing a fixed error. Building on a method introduced by Gu and Effros for centralized coding problems, we develop a general and simple recipe for proving strong converse that is applicable for distributed problems as well. Heuristically, our proof of strong converse mimics the standard steps for proving a weak converse, except that we apply those steps to a modified distribution obtained by conditioning the original distribution on the event that no error occurs. A key component of our recipe is the replacement of the hard Markov constraints implied by the distributed nature of the problem with a soft information cost using a variational formula introduced by Oohama. We illustrate our method by providing a short proof of the strong converse for the Wyner-Ziv problem and strong converse theorems for interactive function computation, common randomness and secret key agreement, and the wiretap channel; the latter three strong converse problems were open prior to this work.
[ { "created": "Sat, 12 May 2018 00:34:37 GMT", "version": "v1" }, { "created": "Wed, 21 Aug 2019 14:13:36 GMT", "version": "v2" } ]
2019-08-22
[ [ "Tyagi", "Himanshu", "" ], [ "Watanabe", "Shun", "" ] ]
The strong converse for a coding theorem shows that the optimal asymptotic rate possible with vanishing error cannot be improved by allowing a fixed error. Building on a method introduced by Gu and Effros for centralized coding problems, we develop a general and simple recipe for proving strong converse that is applicable for distributed problems as well. Heuristically, our proof of strong converse mimics the standard steps for proving a weak converse, except that we apply those steps to a modified distribution obtained by conditioning the original distribution on the event that no error occurs. A key component of our recipe is the replacement of the hard Markov constraints implied by the distributed nature of the problem with a soft information cost using a variational formula introduced by Oohama. We illustrate our method by providing a short proof of the strong converse for the Wyner-Ziv problem and strong converse theorems for interactive function computation, common randomness and secret key agreement, and the wiretap channel; the latter three strong converse problems were open prior to this work.
1101.3220
Andreas Schenk
Andreas Schenk and Robert F.H. Fischer
Decision-Feedback Differential Detection in Impulse-Radio Ultra-Wideband Systems
Preprint of manuscript accepted for presentation in "IEEE Transactions on Communications"
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present decision-feedback differential detection (DF-DD) schemes for autocorrelation-based detection in impulse-radio ultra-wideband (IR-UWB) systems, a signaling scheme regarded as a promising candidate in particular for low-complexity wireless sensor networks. To this end, we first discuss ideal noncoherent sequence estimation and approximations thereof based on block-wise multiple-symbol differential detection (MSDD) and the Viterbi algorithm (VA) from the perspective of tree-search/trellis decoding. Exploiting relations well-known from tree-search decoding, we are able to derive the novel decision-feedback differential detection (DF-DD) schemes. A comprehensive comparison with respect to performance and complexity of the presented schemes in a typical IR-UWB scenario reveals---along with novel insights in techniques for complexity reduction of the sphere decoder applied for MSDD---that sorted DF-DD achieves close-to-optimum performance at very low, and in particular constant receiver complexity.
[ { "created": "Mon, 17 Jan 2011 14:08:26 GMT", "version": "v1" } ]
2011-01-18
[ [ "Schenk", "Andreas", "" ], [ "Fischer", "Robert F. H.", "" ] ]
In this paper we present decision-feedback differential detection (DF-DD) schemes for autocorrelation-based detection in impulse-radio ultra-wideband (IR-UWB) systems, a signaling scheme regarded as a promising candidate in particular for low-complexity wireless sensor networks. To this end, we first discuss ideal noncoherent sequence estimation and approximations thereof based on block-wise multiple-symbol differential detection (MSDD) and the Viterbi algorithm (VA) from the perspective of tree-search/trellis decoding. Exploiting relations well-known from tree-search decoding, we are able to derive the novel decision-feedback differential detection (DF-DD) schemes. A comprehensive comparison with respect to performance and complexity of the presented schemes in a typical IR-UWB scenario reveals---along with novel insights in techniques for complexity reduction of the sphere decoder applied for MSDD---that sorted DF-DD achieves close-to-optimum performance at very low, and in particular constant receiver complexity.
1901.01255
Tolga Birdal
Tolga Birdal and Benjamin Busam and Nassir Navab and Slobodan Ilic and Peter Sturm
Generic Primitive Detection in Point Clouds Using Novel Minimal Quadric Fits
Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI). arXiv admin note: substantial text overlap with arXiv:1803.07191
null
null
null
cs.CV cs.CG cs.GR cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel and effective method for detecting 3D primitives in cluttered, unorganized point clouds, without axillary segmentation or type specification. We consider the quadric surfaces for encapsulating the basic building blocks of our environments - planes, spheres, ellipsoids, cones or cylinders, in a unified fashion. Moreover, quadrics allow us to model higher degree of freedom shapes, such as hyperboloids or paraboloids that could be used in non-rigid settings. We begin by contributing two novel quadric fits targeting 3D point sets that are endowed with tangent space information. Based upon the idea of aligning the quadric gradients with the surface normals, our first formulation is exact and requires as low as four oriented points. The second fit approximates the first, and reduces the computational effort. We theoretically analyze these fits with rigor, and give algebraic and geometric arguments. Next, by re-parameterizing the solution, we devise a new local Hough voting scheme on the null-space coefficients that is combined with RANSAC, reducing the complexity from $O(N^4)$ to $O(N^3)$ (three points). To the best of our knowledge, this is the first method capable of performing a generic cross-type multi-object primitive detection in difficult scenes without segmentation. Our extensive qualitative and quantitative results show that our method is efficient and flexible, as well as being accurate.
[ { "created": "Fri, 4 Jan 2019 12:09:50 GMT", "version": "v1" } ]
2019-01-08
[ [ "Birdal", "Tolga", "" ], [ "Busam", "Benjamin", "" ], [ "Navab", "Nassir", "" ], [ "Ilic", "Slobodan", "" ], [ "Sturm", "Peter", "" ] ]
We present a novel and effective method for detecting 3D primitives in cluttered, unorganized point clouds, without axillary segmentation or type specification. We consider the quadric surfaces for encapsulating the basic building blocks of our environments - planes, spheres, ellipsoids, cones or cylinders, in a unified fashion. Moreover, quadrics allow us to model higher degree of freedom shapes, such as hyperboloids or paraboloids that could be used in non-rigid settings. We begin by contributing two novel quadric fits targeting 3D point sets that are endowed with tangent space information. Based upon the idea of aligning the quadric gradients with the surface normals, our first formulation is exact and requires as low as four oriented points. The second fit approximates the first, and reduces the computational effort. We theoretically analyze these fits with rigor, and give algebraic and geometric arguments. Next, by re-parameterizing the solution, we devise a new local Hough voting scheme on the null-space coefficients that is combined with RANSAC, reducing the complexity from $O(N^4)$ to $O(N^3)$ (three points). To the best of our knowledge, this is the first method capable of performing a generic cross-type multi-object primitive detection in difficult scenes without segmentation. Our extensive qualitative and quantitative results show that our method is efficient and flexible, as well as being accurate.
2407.11902
Qi Li
Qi Li, Runpeng Yu, Xinchao Wang
Encapsulating Knowledge in One Prompt
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paradigm encapsulates knowledge from various models into a solitary prompt without altering the original models or requiring access to the training data, which enables us to achieve efficient and convenient knowledge transfer in more realistic scenarios. From a practicality standpoint, this paradigm not only for the first time proves the effectiveness of Visual Prompt in data inaccessible contexts, but also solves the problems of low model reusability and high storage resource consumption faced by traditional Data-Free Knowledge Transfer, which means that we can realize the parallel knowledge transfer of multiple models without modifying any source model. Extensive experiments across various datasets and models demonstrate the efficacy of the proposed KiOP knowledge transfer paradigm. Without access to real training data and with rigorous storage capacity constraints, it is also capable of yielding considerable outcomes when dealing with cross-model backbone setups and handling parallel knowledge transfer processing requests with multiple (more than 2) models.
[ { "created": "Tue, 16 Jul 2024 16:35:23 GMT", "version": "v1" } ]
2024-07-17
[ [ "Li", "Qi", "" ], [ "Yu", "Runpeng", "" ], [ "Wang", "Xinchao", "" ] ]
This paradigm encapsulates knowledge from various models into a solitary prompt without altering the original models or requiring access to the training data, which enables us to achieve efficient and convenient knowledge transfer in more realistic scenarios. From a practicality standpoint, this paradigm not only for the first time proves the effectiveness of Visual Prompt in data inaccessible contexts, but also solves the problems of low model reusability and high storage resource consumption faced by traditional Data-Free Knowledge Transfer, which means that we can realize the parallel knowledge transfer of multiple models without modifying any source model. Extensive experiments across various datasets and models demonstrate the efficacy of the proposed KiOP knowledge transfer paradigm. Without access to real training data and with rigorous storage capacity constraints, it is also capable of yielding considerable outcomes when dealing with cross-model backbone setups and handling parallel knowledge transfer processing requests with multiple (more than 2) models.
2212.11726
David Kuric
David Kuric, Herke van Hoof
Reusable Options through Gradient-based Meta Learning
Published in Transactions on Machine Learning Research (TMLR)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hierarchical methods in reinforcement learning have the potential to reduce the amount of decisions that the agent needs to perform when learning new tasks. However, finding reusable useful temporal abstractions that facilitate fast learning remains a challenging problem. Recently, several deep learning approaches were proposed to learn such temporal abstractions in the form of options in an end-to-end manner. In this work, we point out several shortcomings of these methods and discuss their potential negative consequences. Subsequently, we formulate the desiderata for reusable options and use these to frame the problem of learning options as a gradient-based meta-learning problem. This allows us to formulate an objective that explicitly incentivizes options which allow a higher-level decision maker to adjust in few steps to different tasks. Experimentally, we show that our method is able to learn transferable components which accelerate learning and performs better than existing prior methods developed for this setting. Additionally, we perform ablations to quantify the impact of using gradient-based meta-learning as well as other proposed changes.
[ { "created": "Thu, 22 Dec 2022 14:19:35 GMT", "version": "v1" }, { "created": "Tue, 4 Apr 2023 10:46:54 GMT", "version": "v2" } ]
2023-04-05
[ [ "Kuric", "David", "" ], [ "van Hoof", "Herke", "" ] ]
Hierarchical methods in reinforcement learning have the potential to reduce the amount of decisions that the agent needs to perform when learning new tasks. However, finding reusable useful temporal abstractions that facilitate fast learning remains a challenging problem. Recently, several deep learning approaches were proposed to learn such temporal abstractions in the form of options in an end-to-end manner. In this work, we point out several shortcomings of these methods and discuss their potential negative consequences. Subsequently, we formulate the desiderata for reusable options and use these to frame the problem of learning options as a gradient-based meta-learning problem. This allows us to formulate an objective that explicitly incentivizes options which allow a higher-level decision maker to adjust in few steps to different tasks. Experimentally, we show that our method is able to learn transferable components which accelerate learning and performs better than existing prior methods developed for this setting. Additionally, we perform ablations to quantify the impact of using gradient-based meta-learning as well as other proposed changes.
1812.07129
Ashkan Ebadi
Ashkan Ebadi, Patrick J. Tighe, Lei Zhang, Parisa Rashidi
Does the Position of Surgical Service Providers in Intra-Operative Networks Matter? Analyzing the Impact of Influencing Factors on Patients' Outcome
17 pages, 3 Figures, 5 Tables PrePrint
null
null
null
cs.SI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyzed the relation between surgical service providers' network structure and surgical team size with patient outcome during the operation. We did correlation analysis to evaluate the associations among the network structure measures in the intra-operative networks of surgical service providers. We focused on intra-operative networks of surgical service providers, in a quaternary-care academic medical center, using retrospective Electronic Medical Record (EMR) data. We used de-identified intra-operative data for adult patients who received nonambulatory/nonobstetric surgery in a main operating room at Shands at the University of Florida between June 1, 2011 and November 1, 2014. The intra-operative dataset contained 30,211 unique surgical cases. To perform the analysis, we created the networks of surgical service providers and calculated several network structure measures at both team and individual levels. We considered number of patients' complications as the target variable and assessed its interrelations with the calculated network measures along with other influencing factors (e.g. surgical team size, type of surgery). Our results confirm the significant role of interactions among surgical providers on patient outcome. In addition, we observed that highly central providers at the global network level are more likely to be associated with a lower number of surgical complications, while locally important providers might be associated with higher number of complications. We also found a positive relation between age of patients and number of complications.
[ { "created": "Tue, 18 Dec 2018 01:36:47 GMT", "version": "v1" } ]
2018-12-19
[ [ "Ebadi", "Ashkan", "" ], [ "Tighe", "Patrick J.", "" ], [ "Zhang", "Lei", "" ], [ "Rashidi", "Parisa", "" ] ]
We analyzed the relation between surgical service providers' network structure and surgical team size with patient outcome during the operation. We did correlation analysis to evaluate the associations among the network structure measures in the intra-operative networks of surgical service providers. We focused on intra-operative networks of surgical service providers, in a quaternary-care academic medical center, using retrospective Electronic Medical Record (EMR) data. We used de-identified intra-operative data for adult patients who received nonambulatory/nonobstetric surgery in a main operating room at Shands at the University of Florida between June 1, 2011 and November 1, 2014. The intra-operative dataset contained 30,211 unique surgical cases. To perform the analysis, we created the networks of surgical service providers and calculated several network structure measures at both team and individual levels. We considered number of patients' complications as the target variable and assessed its interrelations with the calculated network measures along with other influencing factors (e.g. surgical team size, type of surgery). Our results confirm the significant role of interactions among surgical providers on patient outcome. In addition, we observed that highly central providers at the global network level are more likely to be associated with a lower number of surgical complications, while locally important providers might be associated with higher number of complications. We also found a positive relation between age of patients and number of complications.
2107.03506
Szymon Talaga
Agnieszka Rychwalska, Szymon Talaga, Karolina Ziembowicz, Dariusz Jemielniak
Communication networks and group effectiveness: the case of English Wikipedia
Preprint (before peer-review)
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
A selection of intellectual goods produced by online communities - e.g. open source software or knowledge bases like Wikipedia - are in daily use by a broad audience, and thus their quality impacts the public at large. Yet, it is still unclear what contributes to the effectiveness of such online peer production systems: what conditions or social processes help them deliver quality products. Specifically, while co-contribution (i.e. bipartite networks) are often investigated in online collaboration, the role of interpersonal communication in coordination of online peer-production is much less investigated. To address this gap we have reconstructed networks of personal communication (direct messaging) between Wikipedia editors gathered in so called Wikiprojects - teams of contributors who focus on articles within specific topical areas. We found that effective projects exchange larger volume of direct messages and that their communication structure allows for complex coordination: for sharing of information locally through selective ties, and at the same time globally across the whole group. To verify how these network measures relate to the subjective perception of importance of group communication we conducted semi-structured interviews with members of selected projects. Our interviewees used direct communication for providing feedback, for maintaining close relations and for tapping on the social capital of the Wikipedia community. Our results underscore the importance of communication structure in online collaboration: online peer production communities rely on interpersonal communication to coordinate their work and to maintain high levels of engagement. Design of platforms for such communities should allow for ample group level communication as well as for one-on-one interactions.
[ { "created": "Wed, 7 Jul 2021 22:29:08 GMT", "version": "v1" } ]
2021-07-09
[ [ "Rychwalska", "Agnieszka", "" ], [ "Talaga", "Szymon", "" ], [ "Ziembowicz", "Karolina", "" ], [ "Jemielniak", "Dariusz", "" ] ]
A selection of intellectual goods produced by online communities - e.g. open source software or knowledge bases like Wikipedia - are in daily use by a broad audience, and thus their quality impacts the public at large. Yet, it is still unclear what contributes to the effectiveness of such online peer production systems: what conditions or social processes help them deliver quality products. Specifically, while co-contribution (i.e. bipartite networks) are often investigated in online collaboration, the role of interpersonal communication in coordination of online peer-production is much less investigated. To address this gap we have reconstructed networks of personal communication (direct messaging) between Wikipedia editors gathered in so called Wikiprojects - teams of contributors who focus on articles within specific topical areas. We found that effective projects exchange larger volume of direct messages and that their communication structure allows for complex coordination: for sharing of information locally through selective ties, and at the same time globally across the whole group. To verify how these network measures relate to the subjective perception of importance of group communication we conducted semi-structured interviews with members of selected projects. Our interviewees used direct communication for providing feedback, for maintaining close relations and for tapping on the social capital of the Wikipedia community. Our results underscore the importance of communication structure in online collaboration: online peer production communities rely on interpersonal communication to coordinate their work and to maintain high levels of engagement. Design of platforms for such communities should allow for ample group level communication as well as for one-on-one interactions.
2310.06904
Deepti Ghadiyaram
Piero Esposito, Parmida Atighehchian, Anastasis Germanidis and Deepti Ghadiyaram
Mitigating stereotypical biases in text to image generative systems
4 figures, 8 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art generative text-to-image models are known to exhibit social biases and over-represent certain groups like people of perceived lighter skin tones and men in their outcomes. In this work, we propose a method to mitigate such biases and ensure that the outcomes are fair across different groups of people. We do this by finetuning text-to-image models on synthetic data that varies in perceived skin tones and genders constructed from diverse text prompts. These text prompts are constructed from multiplicative combinations of ethnicities, genders, professions, age groups, and so on, resulting in diverse synthetic data. Our diversity finetuned (DFT) model improves the group fairness metric by 150% for perceived skin tone and 97.7% for perceived gender. Compared to baselines, DFT models generate more people with perceived darker skin tone and more women. To foster open research, we will release all text prompts and code to generate training images.
[ { "created": "Tue, 10 Oct 2023 18:01:52 GMT", "version": "v1" } ]
2023-10-12
[ [ "Esposito", "Piero", "" ], [ "Atighehchian", "Parmida", "" ], [ "Germanidis", "Anastasis", "" ], [ "Ghadiyaram", "Deepti", "" ] ]
State-of-the-art generative text-to-image models are known to exhibit social biases and over-represent certain groups like people of perceived lighter skin tones and men in their outcomes. In this work, we propose a method to mitigate such biases and ensure that the outcomes are fair across different groups of people. We do this by finetuning text-to-image models on synthetic data that varies in perceived skin tones and genders constructed from diverse text prompts. These text prompts are constructed from multiplicative combinations of ethnicities, genders, professions, age groups, and so on, resulting in diverse synthetic data. Our diversity finetuned (DFT) model improves the group fairness metric by 150% for perceived skin tone and 97.7% for perceived gender. Compared to baselines, DFT models generate more people with perceived darker skin tone and more women. To foster open research, we will release all text prompts and code to generate training images.
2407.11678
Luwei Sun
Luwei Sun, Dongrui Shen and Han Feng
Theoretical Insights into CycleGAN: Analyzing Approximation and Estimation Errors in Unpaired Data Generation
null
null
null
null
cs.LG math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we focus on analyzing the excess risk of the unpaired data generation model, called CycleGAN. Unlike classical GANs, CycleGAN not only transforms data between two unpaired distributions but also ensures the mappings are consistent, which is encouraged by the cycle-consistency term unique to CycleGAN. The increasing complexity of model structure and the addition of the cycle-consistency term in CycleGAN present new challenges for error analysis. By considering the impact of both the model architecture and training procedure, the risk is decomposed into two terms: approximation error and estimation error. These two error terms are analyzed separately and ultimately combined by considering the trade-off between them. Each component is rigorously analyzed; the approximation error through constructing approximations of the optimal transport maps, and the estimation error through establishing an upper bound using Rademacher complexity. Our analysis not only isolates these errors but also explores the trade-offs between them, which provides a theoretical insights of how CycleGAN's architecture and training procedures influence its performance.
[ { "created": "Tue, 16 Jul 2024 12:53:53 GMT", "version": "v1" } ]
2024-07-17
[ [ "Sun", "Luwei", "" ], [ "Shen", "Dongrui", "" ], [ "Feng", "Han", "" ] ]
In this paper, we focus on analyzing the excess risk of the unpaired data generation model, called CycleGAN. Unlike classical GANs, CycleGAN not only transforms data between two unpaired distributions but also ensures the mappings are consistent, which is encouraged by the cycle-consistency term unique to CycleGAN. The increasing complexity of model structure and the addition of the cycle-consistency term in CycleGAN present new challenges for error analysis. By considering the impact of both the model architecture and training procedure, the risk is decomposed into two terms: approximation error and estimation error. These two error terms are analyzed separately and ultimately combined by considering the trade-off between them. Each component is rigorously analyzed; the approximation error through constructing approximations of the optimal transport maps, and the estimation error through establishing an upper bound using Rademacher complexity. Our analysis not only isolates these errors but also explores the trade-offs between them, which provides a theoretical insights of how CycleGAN's architecture and training procedures influence its performance.
2204.05989
Sam Zhang
Sam Zhang, K. Hunter Wapman, Daniel B. Larremore, Aaron Clauset
Labor advantages drive the greater productivity of faculty at elite universities
22 pages, 11 figures
null
null
null
cs.DL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Faculty at prestigious institutions dominate scientific discourse, with the small proportion of researchers at elite universities producing a disproportionate share of all research publications. Environmental prestige is known to drive such epistemic disparity, but the mechanisms by which it causes increased faculty productivity remain unknown. Here we combine employment, publication, and federal survey data for 78,802 tenure-track faculty at 262 PhD-granting institutions in the American university system between 2008--2017 to show through multiple lines of evidence that the greater availability of funded graduate and postdoctoral labor at more prestigious institutions drives the environmental effect of prestige on productivity. In particular, we show that greater environmental prestige leads to larger faculty-led research groups, which drive higher faculty productivity, primarily in disciplines with research group collaboration norms. In contrast, we show that productivity does not increase substantially with prestige for either faculty papers published without group members, nor group members themselves. The disproportionate scientific productivity of elite researchers is thus largely explained by their substantial labor advantage, indicating a more limited role for prestige itself in predicting scientific contributions.
[ { "created": "Tue, 12 Apr 2022 17:55:09 GMT", "version": "v1" } ]
2022-04-13
[ [ "Zhang", "Sam", "" ], [ "Wapman", "K. Hunter", "" ], [ "Larremore", "Daniel B.", "" ], [ "Clauset", "Aaron", "" ] ]
Faculty at prestigious institutions dominate scientific discourse, with the small proportion of researchers at elite universities producing a disproportionate share of all research publications. Environmental prestige is known to drive such epistemic disparity, but the mechanisms by which it causes increased faculty productivity remain unknown. Here we combine employment, publication, and federal survey data for 78,802 tenure-track faculty at 262 PhD-granting institutions in the American university system between 2008--2017 to show through multiple lines of evidence that the greater availability of funded graduate and postdoctoral labor at more prestigious institutions drives the environmental effect of prestige on productivity. In particular, we show that greater environmental prestige leads to larger faculty-led research groups, which drive higher faculty productivity, primarily in disciplines with research group collaboration norms. In contrast, we show that productivity does not increase substantially with prestige for either faculty papers published without group members, nor group members themselves. The disproportionate scientific productivity of elite researchers is thus largely explained by their substantial labor advantage, indicating a more limited role for prestige itself in predicting scientific contributions.
2010.04466
Robert Tjarko Lange
Robert Tjarko Lange and Henning Sprekeler
Learning Not to Learn: Nature versus Nurture in Silico
null
null
null
null
cs.LG cs.AI cs.NE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animals are equipped with a rich innate repertoire of sensory, behavioral and motor skills, which allows them to interact with the world immediately after birth. At the same time, many behaviors are highly adaptive and can be tailored to specific environments by means of learning. In this work, we use mathematical analysis and the framework of meta-learning (or 'learning to learn') to answer when it is beneficial to learn such an adaptive strategy and when to hard-code a heuristic behavior. We find that the interplay of ecological uncertainty, task complexity and the agents' lifetime has crucial effects on the meta-learned amortized Bayesian inference performed by an agent. There exist two regimes: One in which meta-learning yields a learning algorithm that implements task-dependent information-integration and a second regime in which meta-learning imprints a heuristic or 'hard-coded' behavior. Further analysis reveals that non-adaptive behaviors are not only optimal for aspects of the environment that are stable across individuals, but also in situations where an adaptation to the environment would in fact be highly beneficial, but could not be done quickly enough to be exploited within the remaining lifetime. Hard-coded behaviors should hence not only be those that always work, but also those that are too complex to be learned within a reasonable time frame.
[ { "created": "Fri, 9 Oct 2020 09:47:40 GMT", "version": "v1" }, { "created": "Thu, 4 Mar 2021 11:27:16 GMT", "version": "v2" }, { "created": "Sun, 1 May 2022 08:38:27 GMT", "version": "v3" } ]
2022-05-03
[ [ "Lange", "Robert Tjarko", "" ], [ "Sprekeler", "Henning", "" ] ]
Animals are equipped with a rich innate repertoire of sensory, behavioral and motor skills, which allows them to interact with the world immediately after birth. At the same time, many behaviors are highly adaptive and can be tailored to specific environments by means of learning. In this work, we use mathematical analysis and the framework of meta-learning (or 'learning to learn') to answer when it is beneficial to learn such an adaptive strategy and when to hard-code a heuristic behavior. We find that the interplay of ecological uncertainty, task complexity and the agents' lifetime has crucial effects on the meta-learned amortized Bayesian inference performed by an agent. There exist two regimes: One in which meta-learning yields a learning algorithm that implements task-dependent information-integration and a second regime in which meta-learning imprints a heuristic or 'hard-coded' behavior. Further analysis reveals that non-adaptive behaviors are not only optimal for aspects of the environment that are stable across individuals, but also in situations where an adaptation to the environment would in fact be highly beneficial, but could not be done quickly enough to be exploited within the remaining lifetime. Hard-coded behaviors should hence not only be those that always work, but also those that are too complex to be learned within a reasonable time frame.
2106.03804
Daniel Rebain
Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
Deep Medial Fields
null
null
null
null
cs.GR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Implicit representations of geometry, such as occupancy fields or signed distance fields (SDF), have recently re-gained popularity in encoding 3D solid shape in a functional form. In this work, we introduce medial fields: a field function derived from the medial axis transform (MAT) that makes available information about the underlying 3D geometry that is immediately useful for a number of downstream tasks. In particular, the medial field encodes the local thickness of a 3D shape, and enables O(1) projection of a query point onto the medial axis. To construct the medial field we require nothing but the SDF of the shape itself, thus allowing its straightforward incorporation in any application that relies on signed distance fields. Working in unison with the O(1) surface projection supported by the SDF, the medial field opens the door for an entirely new set of efficient, shape-aware operations on implicit representations. We present three such applications, including a modification to sphere tracing that renders implicit representations with better convergence properties, a fast construction method for memory-efficient rigid-body collision proxies, and an efficient approximation of ambient occlusion that remains stable with respect to viewpoint variations.
[ { "created": "Mon, 7 Jun 2021 17:15:38 GMT", "version": "v1" } ]
2021-06-08
[ [ "Rebain", "Daniel", "" ], [ "Li", "Ke", "" ], [ "Sitzmann", "Vincent", "" ], [ "Yazdani", "Soroosh", "" ], [ "Yi", "Kwang Moo", "" ], [ "Tagliasacchi", "Andrea", "" ] ]
Implicit representations of geometry, such as occupancy fields or signed distance fields (SDF), have recently re-gained popularity in encoding 3D solid shape in a functional form. In this work, we introduce medial fields: a field function derived from the medial axis transform (MAT) that makes available information about the underlying 3D geometry that is immediately useful for a number of downstream tasks. In particular, the medial field encodes the local thickness of a 3D shape, and enables O(1) projection of a query point onto the medial axis. To construct the medial field we require nothing but the SDF of the shape itself, thus allowing its straightforward incorporation in any application that relies on signed distance fields. Working in unison with the O(1) surface projection supported by the SDF, the medial field opens the door for an entirely new set of efficient, shape-aware operations on implicit representations. We present three such applications, including a modification to sphere tracing that renders implicit representations with better convergence properties, a fast construction method for memory-efficient rigid-body collision proxies, and an efficient approximation of ambient occlusion that remains stable with respect to viewpoint variations.
2206.08257
Amirhossein Reisizadeh
Romain Cosson, Ali Jadbabaie, Anuran Makur, Amirhossein Reisizadeh, Devavrat Shah
Gradient Descent for Low-Rank Functions
26 pages, 2 figures
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several recent empirical studies demonstrate that important machine learning tasks, e.g., training deep neural networks, exhibit low-rank structure, where the loss function varies significantly in only a few directions of the input space. In this paper, we leverage such low-rank structure to reduce the high computational cost of canonical gradient-based methods such as gradient descent (GD). Our proposed \emph{Low-Rank Gradient Descent} (LRGD) algorithm finds an $\epsilon$-approximate stationary point of a $p$-dimensional function by first identifying $r \leq p$ significant directions, and then estimating the true $p$-dimensional gradient at every iteration by computing directional derivatives only along those $r$ directions. We establish that the "directional oracle complexities" of LRGD for strongly convex and non-convex objective functions are $\mathcal{O}(r \log(1/\epsilon) + rp)$ and $\mathcal{O}(r/\epsilon^2 + rp)$, respectively. When $r \ll p$, these complexities are smaller than the known complexities of $\mathcal{O}(p \log(1/\epsilon))$ and $\mathcal{O}(p/\epsilon^2)$ of {\gd} in the strongly convex and non-convex settings, respectively. Thus, LRGD significantly reduces the computational cost of gradient-based methods for sufficiently low-rank functions. In the course of our analysis, we also formally define and characterize the classes of exact and approximately low-rank functions.
[ { "created": "Thu, 16 Jun 2022 15:58:05 GMT", "version": "v1" } ]
2022-06-17
[ [ "Cosson", "Romain", "" ], [ "Jadbabaie", "Ali", "" ], [ "Makur", "Anuran", "" ], [ "Reisizadeh", "Amirhossein", "" ], [ "Shah", "Devavrat", "" ] ]
Several recent empirical studies demonstrate that important machine learning tasks, e.g., training deep neural networks, exhibit low-rank structure, where the loss function varies significantly in only a few directions of the input space. In this paper, we leverage such low-rank structure to reduce the high computational cost of canonical gradient-based methods such as gradient descent (GD). Our proposed \emph{Low-Rank Gradient Descent} (LRGD) algorithm finds an $\epsilon$-approximate stationary point of a $p$-dimensional function by first identifying $r \leq p$ significant directions, and then estimating the true $p$-dimensional gradient at every iteration by computing directional derivatives only along those $r$ directions. We establish that the "directional oracle complexities" of LRGD for strongly convex and non-convex objective functions are $\mathcal{O}(r \log(1/\epsilon) + rp)$ and $\mathcal{O}(r/\epsilon^2 + rp)$, respectively. When $r \ll p$, these complexities are smaller than the known complexities of $\mathcal{O}(p \log(1/\epsilon))$ and $\mathcal{O}(p/\epsilon^2)$ of {\gd} in the strongly convex and non-convex settings, respectively. Thus, LRGD significantly reduces the computational cost of gradient-based methods for sufficiently low-rank functions. In the course of our analysis, we also formally define and characterize the classes of exact and approximately low-rank functions.
1705.10664
Jiaji Zhou
Jiaji Zhou, J. Andrew Bagnell and Matthew T. Mason
A Fast Stochastic Contact Model for Planar Pushing and Grasping: Theory and Experimental Validation
Robotics: Science and Systems 2017
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on the convex force-motion polynomial model for quasi-static sliding, we derive the kinematic contact model to determine the contact modes and instantaneous object motion on a supporting surface given a position controlled manipulator. The inherently stochastic object-to-surface friction distribution is modelled by sampling physically consistent parameters from appropriate distributions, with only one parameter to control the amount of noise. Thanks to the high fidelity and smoothness of convex polynomial models, the mechanics of patch contact is captured while being computationally efficient without mode selection at support points. The motion equations for both single and multiple frictional contacts are given. Simulation based on the model is validated with robotic pushing and grasping experiments.
[ { "created": "Tue, 30 May 2017 14:21:28 GMT", "version": "v1" } ]
2017-05-31
[ [ "Zhou", "Jiaji", "" ], [ "Bagnell", "J. Andrew", "" ], [ "Mason", "Matthew T.", "" ] ]
Based on the convex force-motion polynomial model for quasi-static sliding, we derive the kinematic contact model to determine the contact modes and instantaneous object motion on a supporting surface given a position controlled manipulator. The inherently stochastic object-to-surface friction distribution is modelled by sampling physically consistent parameters from appropriate distributions, with only one parameter to control the amount of noise. Thanks to the high fidelity and smoothness of convex polynomial models, the mechanics of patch contact is captured while being computationally efficient without mode selection at support points. The motion equations for both single and multiple frictional contacts are given. Simulation based on the model is validated with robotic pushing and grasping experiments.
2008.04717
Lukas Holzbaur
Lukas Holzbaur, Rina Polyanskaya, Nikita Polyanskii, Ilya Vorobyev and Eitan Yaakobi
Lifted Multiplicity Codes
null
null
null
null
cs.IT cs.DM math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lifted Reed-Solomon codes and multiplicity codes are two classes of evaluation codes that allow for the design of high-rate codes that can recover every codeword or information symbol from many disjoint sets. Recently, the underlying approaches have been combined to construct lifted bi-variate multiplicity codes, that can further improve on the rate. We continue the study of these codes by providing lower bounds on the rate and distance for lifted multiplicity codes obtained from polynomials in an arbitrary number of variables. Specifically, we investigate a subcode of a lifted multiplicity code formed by the linear span of $m$-variate monomials whose restriction to an arbitrary line in $\mathbb{F}_q^m$ is equivalent to a low-degree uni-variate polynomial. We find the tight asymptotic behavior of the fraction of such monomials when the number of variables $m$ is fixed and the alphabet size $q=2^\ell$ is large. For some parameter regimes, lifted multiplicity codes are then shown to have a better trade-off between redundancy and the number of disjoint recovering sets for every codeword or information symbol than previously known constructions. Additionally, we present a local self-correction algorithm for lifted multiplicity codes.
[ { "created": "Tue, 11 Aug 2020 14:24:52 GMT", "version": "v1" }, { "created": "Thu, 29 Oct 2020 14:41:53 GMT", "version": "v2" } ]
2020-10-30
[ [ "Holzbaur", "Lukas", "" ], [ "Polyanskaya", "Rina", "" ], [ "Polyanskii", "Nikita", "" ], [ "Vorobyev", "Ilya", "" ], [ "Yaakobi", "Eitan", "" ] ]
Lifted Reed-Solomon codes and multiplicity codes are two classes of evaluation codes that allow for the design of high-rate codes that can recover every codeword or information symbol from many disjoint sets. Recently, the underlying approaches have been combined to construct lifted bi-variate multiplicity codes, that can further improve on the rate. We continue the study of these codes by providing lower bounds on the rate and distance for lifted multiplicity codes obtained from polynomials in an arbitrary number of variables. Specifically, we investigate a subcode of a lifted multiplicity code formed by the linear span of $m$-variate monomials whose restriction to an arbitrary line in $\mathbb{F}_q^m$ is equivalent to a low-degree uni-variate polynomial. We find the tight asymptotic behavior of the fraction of such monomials when the number of variables $m$ is fixed and the alphabet size $q=2^\ell$ is large. For some parameter regimes, lifted multiplicity codes are then shown to have a better trade-off between redundancy and the number of disjoint recovering sets for every codeword or information symbol than previously known constructions. Additionally, we present a local self-correction algorithm for lifted multiplicity codes.
1909.05663
Riccardo La Grassa
Ignazio Gallo, Shah Nawaz, Alessandro Calefati, Riccardo La Grassa, Nicola Landro
Picture What you Read
7 pages, Dicta2019 conference
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Visualization refers to our ability to create an image in our head based on the text we read or the words we hear. It is one of the many skills that makes reading comprehension possible. Convolutional Neural Networks (CNN) are an excellent tool for recognizing and classifying text documents. In addition, it can generate images conditioned on natural language. In this work, we utilize CNNs capabilities to generate realistic images representative of the text illustrating the semantic concept. We conducted various experiments to highlight the capacity of the proposed model to generate representative images of the text descriptions used as input to the proposed model.
[ { "created": "Mon, 9 Sep 2019 11:26:35 GMT", "version": "v1" } ]
2019-09-13
[ [ "Gallo", "Ignazio", "" ], [ "Nawaz", "Shah", "" ], [ "Calefati", "Alessandro", "" ], [ "La Grassa", "Riccardo", "" ], [ "Landro", "Nicola", "" ] ]
Visualization refers to our ability to create an image in our head based on the text we read or the words we hear. It is one of the many skills that makes reading comprehension possible. Convolutional Neural Networks (CNN) are an excellent tool for recognizing and classifying text documents. In addition, it can generate images conditioned on natural language. In this work, we utilize CNNs capabilities to generate realistic images representative of the text illustrating the semantic concept. We conducted various experiments to highlight the capacity of the proposed model to generate representative images of the text descriptions used as input to the proposed model.
1606.05830
Cesar Cadena
Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, Jose Neira, Ian Reid, John J. Leonard
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
null
IEEE Transactions on Robotics 32 (6) pp 1309-1332, 2016
10.1109/TRO.2016.2624754
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simultaneous Localization and Mapping (SLAM)consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
[ { "created": "Sun, 19 Jun 2016 03:23:53 GMT", "version": "v1" }, { "created": "Wed, 20 Jul 2016 06:32:40 GMT", "version": "v2" }, { "created": "Wed, 23 Nov 2016 15:07:09 GMT", "version": "v3" }, { "created": "Mon, 30 Jan 2017 12:05:48 GMT", "version": "v4" } ]
2017-01-31
[ [ "Cadena", "Cesar", "" ], [ "Carlone", "Luca", "" ], [ "Carrillo", "Henry", "" ], [ "Latif", "Yasir", "" ], [ "Scaramuzza", "Davide", "" ], [ "Neira", "Jose", "" ], [ "Reid", "Ian", "" ], [ "Leonard", "John J.", "" ] ]
Simultaneous Localization and Mapping (SLAM)consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
1802.07440
Telikepalli Kavitha
Telikepalli Kavitha
Max-size popular matchings and extensions
26 pages, 10 figures
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the max-size popular matching problem in a roommates instance G = (V,E) with strict preference lists. A matching M is popular if there is no matching M' in G such that the vertices that prefer M' to M outnumber those that prefer M to M'. We show it is NP-hard to compute a max-size popular matching in G. This is in contrast to the tractability of this problem in bipartite graphs where a max-size popular matching can be computed in linear time. We define a subclass of max-size popular matchings called strongly dominant matchings and show a linear time algorithm to solve the strongly dominant matching problem in a roommates instance. We consider a generalization of the max-size popular matching problem in bipartite graphs: this is the max-weight popular matching problem where there is also an edge weight function w and we seek a popular matching of largest weight. We show this is an NP-hard problem and this is so even when w(e) is either 1 or 2 for every edge e. We also show an algorithm with running time O*(2^{n/4}) to find a max-weight popular matching matching in G = (A U B,E)$ on n vertices.
[ { "created": "Wed, 21 Feb 2018 06:43:21 GMT", "version": "v1" } ]
2018-02-22
[ [ "Kavitha", "Telikepalli", "" ] ]
We consider the max-size popular matching problem in a roommates instance G = (V,E) with strict preference lists. A matching M is popular if there is no matching M' in G such that the vertices that prefer M' to M outnumber those that prefer M to M'. We show it is NP-hard to compute a max-size popular matching in G. This is in contrast to the tractability of this problem in bipartite graphs where a max-size popular matching can be computed in linear time. We define a subclass of max-size popular matchings called strongly dominant matchings and show a linear time algorithm to solve the strongly dominant matching problem in a roommates instance. We consider a generalization of the max-size popular matching problem in bipartite graphs: this is the max-weight popular matching problem where there is also an edge weight function w and we seek a popular matching of largest weight. We show this is an NP-hard problem and this is so even when w(e) is either 1 or 2 for every edge e. We also show an algorithm with running time O*(2^{n/4}) to find a max-weight popular matching matching in G = (A U B,E)$ on n vertices.
cs/0608035
Kohei Suenaga
Naoki Kobayashi, Kohei Suenaga, and Lucian Wischik
Resource Usage Analysis for the Pi-Calculus
null
Logical Methods in Computer Science, Volume 2, Issue 3 (September 13, 2006) lmcs:2246
10.2168/LMCS-2(3:4)2006
null
cs.PL cs.LO
null
We propose a type-based resource usage analysis for the π-calculus extended with resource creation/access primitives. The goal of the resource usage analysis is to statically check that a program accesses resources such as files and memory in a valid manner. Our type system is an extension of previous behavioral type systems for the π-calculus, and can guarantee the safety property that no invalid access is performed, as well as the property that necessary accesses (such as the close operation for a file) are eventually performed unless the program diverges. A sound type inference algorithm for the type system is also developed to free the programmer from the burden of writing complex type annotations. Based on the algorithm, we have implemented a prototype resource usage analyzer for the π-calculus. To the authors' knowledge, ours is the first type-based resource usage analysis that deals with an expressive concurrent language like the pi-calculus.
[ { "created": "Mon, 7 Aug 2006 05:22:42 GMT", "version": "v1" }, { "created": "Wed, 13 Sep 2006 11:05:09 GMT", "version": "v2" } ]
2017-01-11
[ [ "Kobayashi", "Naoki", "" ], [ "Suenaga", "Kohei", "" ], [ "Wischik", "Lucian", "" ] ]
We propose a type-based resource usage analysis for the π-calculus extended with resource creation/access primitives. The goal of the resource usage analysis is to statically check that a program accesses resources such as files and memory in a valid manner. Our type system is an extension of previous behavioral type systems for the π-calculus, and can guarantee the safety property that no invalid access is performed, as well as the property that necessary accesses (such as the close operation for a file) are eventually performed unless the program diverges. A sound type inference algorithm for the type system is also developed to free the programmer from the burden of writing complex type annotations. Based on the algorithm, we have implemented a prototype resource usage analyzer for the π-calculus. To the authors' knowledge, ours is the first type-based resource usage analysis that deals with an expressive concurrent language like the pi-calculus.
2107.09283
Hyunji Chung
Jungheum Park, Hyunji Chung
Toward Trustworthy Urban IT Systems: The Bright and Dark Sides of Smart City Development
1 figure
null
null
null
cs.CY
http://creativecommons.org/publicdomain/zero/1.0/
In smart cities built on information and communication technology, citizens and various IT systems interoperate in harmony. Cloud computing and Internet-of-Things technologies that have been developed for a long time are making modern cities smarter. Smart cities can have a positive impact on citizens, but they can also make cities dangerous. Today, with the emerging reality of smart cities, this paper looks at both the bright and dark sides and provides a foundation for supporting work-related tasks of IT professionals as well as non-IT experts involved in urban design and development.
[ { "created": "Tue, 20 Jul 2021 06:54:08 GMT", "version": "v1" } ]
2021-07-21
[ [ "Park", "Jungheum", "" ], [ "Chung", "Hyunji", "" ] ]
In smart cities built on information and communication technology, citizens and various IT systems interoperate in harmony. Cloud computing and Internet-of-Things technologies that have been developed for a long time are making modern cities smarter. Smart cities can have a positive impact on citizens, but they can also make cities dangerous. Today, with the emerging reality of smart cities, this paper looks at both the bright and dark sides and provides a foundation for supporting work-related tasks of IT professionals as well as non-IT experts involved in urban design and development.
1612.09134
Antonio Manuel Lopez Pe\~na
Antonio M. Lopez, Jiaolong Xu, Jose L. Gomez, David Vazquez, German Ros
From Virtual to Real World Visual Perception using Domain Adaptation -- The DPM as Example
Invited book chapter to appear in "Domain Adaptation in Computer Vision Applications", Springer Series: Advances in Computer Vision and Pattern Recognition, Edited by Gabriela Csurka
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supervised learning tends to produce more accurate classifiers than unsupervised learning in general. This implies that training data is preferred with annotations. When addressing visual perception challenges, such as localizing certain object classes within an image, the learning of the involved classifiers turns out to be a practical bottleneck. The reason is that, at least, we have to frame object examples with bounding boxes in thousands of images. A priori, the more complex the model is regarding its number of parameters, the more annotated examples are required. This annotation task is performed by human oracles, which ends up in inaccuracies and errors in the annotations (aka ground truth) since the task is inherently very cumbersome and sometimes ambiguous. As an alternative we have pioneered the use of virtual worlds for collecting such annotations automatically and with high precision. However, since the models learned with virtual data must operate in the real world, we still need to perform domain adaptation (DA). In this chapter we revisit the DA of a deformable part-based model (DPM) as an exemplifying case of virtual- to-real-world DA. As a use case, we address the challenge of vehicle detection for driver assistance, using different publicly available virtual-world data. While doing so, we investigate questions such as: how does the domain gap behave due to virtual-vs-real data with respect to dominant object appearance per domain, as well as the role of photo-realism in the virtual world.
[ { "created": "Thu, 29 Dec 2016 13:16:22 GMT", "version": "v1" } ]
2016-12-30
[ [ "Lopez", "Antonio M.", "" ], [ "Xu", "Jiaolong", "" ], [ "Gomez", "Jose L.", "" ], [ "Vazquez", "David", "" ], [ "Ros", "German", "" ] ]
Supervised learning tends to produce more accurate classifiers than unsupervised learning in general. This implies that training data is preferred with annotations. When addressing visual perception challenges, such as localizing certain object classes within an image, the learning of the involved classifiers turns out to be a practical bottleneck. The reason is that, at least, we have to frame object examples with bounding boxes in thousands of images. A priori, the more complex the model is regarding its number of parameters, the more annotated examples are required. This annotation task is performed by human oracles, which ends up in inaccuracies and errors in the annotations (aka ground truth) since the task is inherently very cumbersome and sometimes ambiguous. As an alternative we have pioneered the use of virtual worlds for collecting such annotations automatically and with high precision. However, since the models learned with virtual data must operate in the real world, we still need to perform domain adaptation (DA). In this chapter we revisit the DA of a deformable part-based model (DPM) as an exemplifying case of virtual- to-real-world DA. As a use case, we address the challenge of vehicle detection for driver assistance, using different publicly available virtual-world data. While doing so, we investigate questions such as: how does the domain gap behave due to virtual-vs-real data with respect to dominant object appearance per domain, as well as the role of photo-realism in the virtual world.