Dataset schema (column: type, value-length range):
  id: stringlengths, 9 to 10
  submitter: stringlengths, 1 to 64
  authors: stringlengths, 4 to 20.7k
  title: stringlengths, 4 to 246
  comments: stringlengths, 1 to 523
  journal-ref: stringlengths, 4 to 404
  doi: stringlengths, 11 to 153
  report-no: stringlengths, 2 to 254
  categories: stringlengths, 5 to 98
  license: stringclasses, 9 values
  orig_abstract: stringlengths, 14 to 3.35k
  versions: listlengths, 1 to 60
  update_date: stringlengths, 10 to 10
  authors_parsed: listlengths, 1 to 1.35k
  abstract: stringlengths, 11 to 3.34k
1201.2905
Qiyang Zhao
Zhao Qiyang
NegCut: Automatic Image Segmentation based on MRF-MAP
Since this submission unfortunately failed due to a length-limit violation, I would like to save it on arXiv as a record. Any suggestions are welcome
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Solving the Maximum a Posteriori problem on a Markov Random Field, MRF-MAP, is a prevailing method in recent interactive image segmentation tools. Although mathematically explicit in its computational target and impressive in segmentation quality, MRF-MAP is hard to solve without interactive information from users, so it has rarely been adopted in fully automatic settings to date. In this paper, we present an automatic image segmentation algorithm, NegCut, based on an approximation to MRF-MAP. We first prove that MRF-MAP is NP-hard when the probabilistic models are unknown, and then present an approximation function in the form of minimum cuts on graphs with negative weights. Finally, the binary segmentation is taken from the largest eigenvector of the target matrix, computed with a tuned version of the Lanczos eigensolver. Our experiments show the method is competitive in segmentation quality.
[ { "created": "Fri, 13 Jan 2012 18:18:03 GMT", "version": "v1" }, { "created": "Mon, 16 Jan 2012 03:28:43 GMT", "version": "v2" } ]
2012-01-17
[ [ "Qiyang", "Zhao", "" ] ]
Solving the Maximum a Posteriori problem on a Markov Random Field, MRF-MAP, is a prevailing method in recent interactive image segmentation tools. Although mathematically explicit in its computational target and impressive in segmentation quality, MRF-MAP is hard to solve without interactive information from users, so it has rarely been adopted in fully automatic settings to date. In this paper, we present an automatic image segmentation algorithm, NegCut, based on an approximation to MRF-MAP. We first prove that MRF-MAP is NP-hard when the probabilistic models are unknown, and then present an approximation function in the form of minimum cuts on graphs with negative weights. Finally, the binary segmentation is taken from the largest eigenvector of the target matrix, computed with a tuned version of the Lanczos eigensolver. Our experiments show the method is competitive in segmentation quality.
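The final step above, extracting a binary segmentation from the largest eigenvector, can be sketched with plain power iteration. This is a simpler stand-in for the paper's tuned Lanczos solver, and the small matrix below is an invented toy example, not the paper's graph construction:

```python
import numpy as np

def dominant_eigvec(A, iters=500, tol=1e-10, seed=0):
    """Power iteration: approximates the eigenvector of the symmetric
    matrix A whose eigenvalue has the largest magnitude."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        w /= np.linalg.norm(w)
        # Stop when the direction stabilizes (up to an overall sign flip).
        if min(np.linalg.norm(w - v), np.linalg.norm(w + v)) < tol:
            v = w
            break
        v = w
    return v

# Toy affinity matrix with two clear blocks (negative cross-weights);
# the sign pattern of the dominant eigenvector yields a binary labeling.
A = np.array([[ 5.0,  4.0, -2.0, -2.0],
              [ 4.0,  5.0, -2.0, -2.0],
              [-2.0, -2.0,  5.0,  4.0],
              [-2.0, -2.0,  4.0,  5.0]])
v = dominant_eigvec(A)
labels = (v > 0).astype(int)   # pixels 0,1 end up in one segment, 2,3 in the other
```

For this matrix the dominant eigenvalue is 13 with eigenvector proportional to (1, 1, -1, -1), so thresholding at zero splits the four "pixels" into the two intended groups.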
1804.01918
Alessandro Gabbana
Enrico Calore, Alessandro Gabbana, Sebastiano Fabio Schifano, Raffaele Tripiccione
Early Experience on Using Knights Landing Processors for Lattice Boltzmann Applications
null
null
10.1007/978-3-319-78024-5_45
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Knights Landing (KNL) is the codename for the latest generation of Intel processors based on Intel Many Integrated Core (MIC) architecture. It relies on massive thread and data parallelism, and fast on-chip memory. This processor operates in standalone mode, booting an off-the-shelf Linux operating system. The KNL peak performance is very high - approximately 3 Tflops in double precision and 6 Tflops in single precision - but sustained performance depends critically on how well all parallel features of the processor are exploited by real-life applications. We assess the performance of this processor for Lattice Boltzmann codes, widely used in computational fluid-dynamics. In our OpenMP code we consider several memory data-layouts that meet the conflicting computing requirements of distinct parts of the application, and sustain a large fraction of peak performance. We make some performance comparisons with other processors and accelerators, and also discuss the impact of the various memory layouts on energy efficiency.
[ { "created": "Thu, 5 Apr 2018 15:47:04 GMT", "version": "v1" } ]
2018-04-06
[ [ "Calore", "Enrico", "" ], [ "Gabbana", "Alessandro", "" ], [ "Schifano", "Sebastiano Fabio", "" ], [ "Tripiccione", "Raffaele", "" ] ]
The Knights Landing (KNL) is the codename for the latest generation of Intel processors based on Intel Many Integrated Core (MIC) architecture. It relies on massive thread and data parallelism, and fast on-chip memory. This processor operates in standalone mode, booting an off-the-shelf Linux operating system. The KNL peak performance is very high - approximately 3 Tflops in double precision and 6 Tflops in single precision - but sustained performance depends critically on how well all parallel features of the processor are exploited by real-life applications. We assess the performance of this processor for Lattice Boltzmann codes, widely used in computational fluid-dynamics. In our OpenMP code we consider several memory data-layouts that meet the conflicting computing requirements of distinct parts of the application, and sustain a large fraction of peak performance. We make some performance comparisons with other processors and accelerators, and also discuss the impact of the various memory layouts on energy efficiency.
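The memory-data-layout tension mentioned above, between array-of-structures (AoS) and structure-of-arrays (SoA) storage of the per-site populations, can be illustrated in a few lines. This is a generic sketch of the two layouts, not the authors' code; NPOP and NSITES are arbitrary illustrative values:

```python
import numpy as np

NPOP, NSITES = 4, 1024  # populations per lattice site, number of sites

# AoS: all populations of one site stored contiguously (a structured record).
aos = np.zeros(NSITES, dtype=[(f"f{i}", np.float64) for i in range(NPOP)])

# SoA: one contiguous array per population index, so a vector unit can
# load consecutive sites of the same population in a single wide load.
soa = np.zeros((NPOP, NSITES))

# In SoA, consecutive sites of population 0 are 8 bytes apart (unit stride);
# in AoS they are NPOP * 8 bytes apart (strided access).
assert soa[0].strides == (8,)
assert aos["f0"].strides == (8 * NPOP,)
```

The unit-stride SoA access is what lets wide SIMD units on processors like the KNL stream through one population at a time, while AoS keeps each site's populations together, which suits the collision step; this conflict is why the paper considers several layouts.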
1706.03158
Tryphon Georgiou
Zahra Askarzadeh, Rui Fu, Abhishek Halder, Yongxin Chen, and Tryphon T. Georgiou
Stability Theory of Stochastic Models in Opinion Dynamics
11 pages, 6 figures
null
null
null
cs.SY cs.SI math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a certain class of nonlinear maps that preserve the probability simplex, i.e., stochastic maps, that are inspired by the DeGroot-Friedkin model of belief/opinion propagation over influence networks. The corresponding dynamical models describe the evolution of the probability distribution of interacting species. Such models where the probability transition mechanism depends nonlinearly on the current state are often referred to as {\em nonlinear Markov chains}. In this paper we develop stability results and study the behavior of representative opinion models. The stability certificates are based on the contractivity of the nonlinear evolution in the $\ell_1$-metric. We apply the theory to two types of opinion models where the adaptation of the transition probabilities to the current state is exponential and linear, respectively--both of these can display a wide range of behaviors. We discuss continuous-time and other generalizations.
[ { "created": "Sat, 10 Jun 2017 00:27:38 GMT", "version": "v1" }, { "created": "Wed, 10 Oct 2018 04:34:49 GMT", "version": "v2" } ]
2018-10-11
[ [ "Askarzadeh", "Zahra", "" ], [ "Fu", "Rui", "" ], [ "Halder", "Abhishek", "" ], [ "Chen", "Yongxin", "" ], [ "Georgiou", "Tryphon T.", "" ] ]
We consider a certain class of nonlinear maps that preserve the probability simplex, i.e., stochastic maps, that are inspired by the DeGroot-Friedkin model of belief/opinion propagation over influence networks. The corresponding dynamical models describe the evolution of the probability distribution of interacting species. Such models where the probability transition mechanism depends nonlinearly on the current state are often referred to as {\em nonlinear Markov chains}. In this paper we develop stability results and study the behavior of representative opinion models. The stability certificates are based on the contractivity of the nonlinear evolution in the $\ell_1$-metric. We apply the theory to two types of opinion models where the adaptation of the transition probabilities to the current state is exponential and linear, respectively--both of these can display a wide range of behaviors. We discuss continuous-time and other generalizations.
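A minimal numerical sketch of such a nonlinear Markov chain, with a hypothetical exponential adaptation of the transition probabilities to the current state (a toy model in the same spirit, not the paper's exact dynamics), is:

```python
import numpy as np

def step(x, A, beta=1.0):
    """One step of a toy nonlinear Markov chain: column j of the base
    weights A is scaled by exp(beta * x_j), then rows are normalized,
    so the transition matrix depends on the current distribution x."""
    W = A * np.exp(beta * x)[None, :]
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic by construction
    return x @ P                           # stays on the probability simplex

x = np.array([0.5, 0.3, 0.2])              # initial opinion distribution
A = np.ones((3, 3))                        # uniform base weights
for _ in range(200):
    x = step(x, A)
```

With uniform base weights and beta = 1 this toy iteration contracts toward the uniform distribution while remaining on the simplex at every step, illustrating the kind of fixed-point behavior the stability certificates address.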
2407.20068
Yuhan Liu
Yuhan Liu, Sheng Wang, Yixuan Liu, Feifei Li, Hong Chen
Unleash the Power of Ellipsis: Accuracy-enhanced Sparse Vector Technique with Exponential Noise
null
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Sparse Vector Technique (SVT) is one of the most fundamental tools in differential privacy (DP). It works as a backbone for adaptive data analysis by answering a sequence of queries on a given dataset, and gleaning useful information in a privacy-preserving manner. Unlike the typical private query releases that directly publicize the noisy query results, SVT is less informative -- it keeps the noisy query results to itself and only reveals a binary bit for each query, indicating whether the query result surpasses a predefined threshold. To provide a rigorous DP guarantee for SVT, prior works in the literature adopt a conservative privacy analysis by assuming the direct disclosure of noisy query results as in typical private query releases. This approach, however, hinders SVT from achieving higher query accuracy due to an overestimation of the privacy risks, which further leads to an excessive noise injection using the Laplacian or Gaussian noise for perturbation. Motivated by this, we provide a new privacy analysis for SVT by considering its less informative nature. Our analysis results not only broaden the range of applicable noise types for perturbation in SVT, but also identify the exponential noise as optimal among all evaluated noises (which, however, is usually deemed non-applicable in prior works). The main challenge in applying exponential noise to SVT is mitigating the sub-optimal performance due to the bias introduced by noise distributions. To address this, we develop a utility-oriented optimal threshold correction method and an appending strategy, which enhances the performance of SVT by increasing the precision and recall, respectively. The effectiveness of our proposed methods is substantiated both theoretically and empirically, demonstrating significant improvements up to $50\%$ across evaluated metrics.
[ { "created": "Mon, 29 Jul 2024 14:54:28 GMT", "version": "v1" } ]
2024-07-30
[ [ "Liu", "Yuhan", "" ], [ "Wang", "Sheng", "" ], [ "Liu", "Yixuan", "" ], [ "Li", "Feifei", "" ], [ "Chen", "Hong", "" ] ]
The Sparse Vector Technique (SVT) is one of the most fundamental tools in differential privacy (DP). It works as a backbone for adaptive data analysis by answering a sequence of queries on a given dataset, and gleaning useful information in a privacy-preserving manner. Unlike the typical private query releases that directly publicize the noisy query results, SVT is less informative -- it keeps the noisy query results to itself and only reveals a binary bit for each query, indicating whether the query result surpasses a predefined threshold. To provide a rigorous DP guarantee for SVT, prior works in the literature adopt a conservative privacy analysis by assuming the direct disclosure of noisy query results as in typical private query releases. This approach, however, hinders SVT from achieving higher query accuracy due to an overestimation of the privacy risks, which further leads to an excessive noise injection using the Laplacian or Gaussian noise for perturbation. Motivated by this, we provide a new privacy analysis for SVT by considering its less informative nature. Our analysis results not only broaden the range of applicable noise types for perturbation in SVT, but also identify the exponential noise as optimal among all evaluated noises (which, however, is usually deemed non-applicable in prior works). The main challenge in applying exponential noise to SVT is mitigating the sub-optimal performance due to the bias introduced by noise distributions. To address this, we develop a utility-oriented optimal threshold correction method and an appending strategy, which enhances the performance of SVT by increasing the precision and recall, respectively. The effectiveness of our proposed methods is substantiated both theoretically and empirically, demonstrating significant improvements up to $50\%$ across evaluated metrics.
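For reference, the textbook Laplace-noise SVT whose conservative analysis the paper revisits can be sketched as follows. This is the standard baseline, not the proposed method; the paper's contribution is a tighter analysis that permits exponential noise plus a threshold correction:

```python
import numpy as np

def sparse_vector(queries, threshold, eps, c=1, seed=0):
    """Textbook Sparse Vector Technique with Laplace noise: answers a
    stream of sensitivity-1 queries with only above/below-threshold
    bits, halting after c 'above' answers."""
    rng = np.random.default_rng(seed)
    t_hat = threshold + rng.laplace(scale=2.0 / eps)   # noisy threshold
    answers, count = [], 0
    for q in queries:
        nu = rng.laplace(scale=4.0 * c / eps)          # per-query noise
        if q + nu >= t_hat:
            answers.append(True)
            count += 1
            if count >= c:                             # privacy budget spent
                break
        else:
            answers.append(False)
    return answers
```

Note that the noisy values q + nu are never released, only the binary bits; this "less informative" output is exactly what the paper's new analysis exploits.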
2008.00023
Gregory D. Hager
Gregory D. Hager, Mark D. Hill, and Katherine Yelick
Opportunities and Challenges for Next Generation Computing
A Computing Community Consortium (CCC) white paper, 7 pages
null
null
null
cs.CY cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computing has dramatically changed nearly every aspect of our lives, from business and agriculture to communication and entertainment. As a nation, we rely on computing in the design of systems for energy, transportation and defense; and computing fuels scientific discoveries that will improve our fundamental understanding of the world and help develop solutions to major challenges in health and the environment. Computing has changed our world, in part, because our innovations can run on computers whose performance and cost-performance have improved a million-fold over the last few decades. A driving force behind this has been a repeated doubling of the transistors per chip, dubbed Moore's Law. A concomitant enabler has been Dennard Scaling, which has permitted these performance doublings at roughly constant power, but, as we will see, both trends face challenges. Consider for a moment the impact of these two trends over the past 30 years. A 1980s supercomputer (e.g. a Cray 2) was rated at nearly 2 Gflops and consumed nearly 200 kW of power. At the time, it was used for high performance and national-scale applications ranging from weather forecasting to nuclear weapons research. A computer of similar performance now fits in our pocket and consumes less than 10 watts. What would be the implications of a similar computing/power reduction over the next 30 years - that is, taking a petaflop-scale machine (e.g. the Cray XK7, which requires about 500 kW for 1 Pflop (= 10^15 operations/sec) performance) and repeating that process? What is possible with such a computer in your pocket? How would it change the landscape of high-capacity computing? In the remainder of this paper, we articulate some opportunities and challenges for dramatic performance improvements in computing at scales from personal to national, and discuss some "out of the box" possibilities for achieving computing at this scale.
[ { "created": "Fri, 31 Jul 2020 18:16:49 GMT", "version": "v1" } ]
2020-08-04
[ [ "Hager", "Gregory D.", "" ], [ "Hill", "Mark D.", "" ], [ "Yelick", "Katherine", "" ] ]
Computing has dramatically changed nearly every aspect of our lives, from business and agriculture to communication and entertainment. As a nation, we rely on computing in the design of systems for energy, transportation and defense; and computing fuels scientific discoveries that will improve our fundamental understanding of the world and help develop solutions to major challenges in health and the environment. Computing has changed our world, in part, because our innovations can run on computers whose performance and cost-performance have improved a million-fold over the last few decades. A driving force behind this has been a repeated doubling of the transistors per chip, dubbed Moore's Law. A concomitant enabler has been Dennard Scaling, which has permitted these performance doublings at roughly constant power, but, as we will see, both trends face challenges. Consider for a moment the impact of these two trends over the past 30 years. A 1980s supercomputer (e.g. a Cray 2) was rated at nearly 2 Gflops and consumed nearly 200 kW of power. At the time, it was used for high performance and national-scale applications ranging from weather forecasting to nuclear weapons research. A computer of similar performance now fits in our pocket and consumes less than 10 watts. What would be the implications of a similar computing/power reduction over the next 30 years - that is, taking a petaflop-scale machine (e.g. the Cray XK7, which requires about 500 kW for 1 Pflop (= 10^15 operations/sec) performance) and repeating that process? What is possible with such a computer in your pocket? How would it change the landscape of high-capacity computing? In the remainder of this paper, we articulate some opportunities and challenges for dramatic performance improvements in computing at scales from personal to national, and discuss some "out of the box" possibilities for achieving computing at this scale.
1602.07732
Mattia Rebato
Mattia Rebato and Marco Mezzavilla and Sundeep Rangan and Michele Zorzi
The Potential of Resource Sharing in 5G Millimeter-Wave Bands
null
null
null
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the severe spectrum shortage in conventional cellular bands, the millimeter-wave (mmWave) frequencies, roughly above 10~GHz, have been attracting growing attention for next-generation micro- and pico-cellular wireless networks. A fundamental and open question is how these bands should be used by cellular operators. Cellular spectrum has traditionally been allocated following an exclusive ownership model. However, in this paper we argue that the distinct nature of mmWave communication -- the massive bandwidth degrees of freedom, directional isolation and high susceptibility to blockage -- suggests that spectrum and infrastructure sharing between multiple operators may be necessary to exploit the full potential of these bands. High-level capacity analyses are presented that reveal significant possible gains under spectrum and infrastructure sharing, even under minimal coordination between operators. Moreover, we discuss how network technologies including software defined networks (SDNs) and network function virtualization (NFV) can easily enable resource sharing by having a programmable core entity provide transparent inter-operator access to the end user.
[ { "created": "Wed, 24 Feb 2016 22:22:00 GMT", "version": "v1" } ]
2016-02-26
[ [ "Rebato", "Mattia", "" ], [ "Mezzavilla", "Marco", "" ], [ "Rangan", "Sundeep", "" ], [ "Zorzi", "Michele", "" ] ]
With the severe spectrum shortage in conventional cellular bands, the millimeter-wave (mmWave) frequencies, roughly above 10~GHz, have been attracting growing attention for next-generation micro- and pico-cellular wireless networks. A fundamental and open question is how these bands should be used by cellular operators. Cellular spectrum has traditionally been allocated following an exclusive ownership model. However, in this paper we argue that the distinct nature of mmWave communication -- the massive bandwidth degrees of freedom, directional isolation and high susceptibility to blockage -- suggests that spectrum and infrastructure sharing between multiple operators may be necessary to exploit the full potential of these bands. High-level capacity analyses are presented that reveal significant possible gains under spectrum and infrastructure sharing, even under minimal coordination between operators. Moreover, we discuss how network technologies including software defined networks (SDNs) and network function virtualization (NFV) can easily enable resource sharing by having a programmable core entity provide transparent inter-operator access to the end user.
2308.07876
Yibo Hu
Yibo Hu, Erick Skorupa Parolin, Latifur Khan, Patrick T. Brandt, Javier Osorio, Vito J. D'Orazio
Leveraging Codebook Knowledge with NLI and ChatGPT for Zero-Shot Political Relation Classification
ACL 2024
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Is it possible to accurately classify political relations within evolving event ontologies without extensive annotations? This study investigates zero-shot learning methods that use expert knowledge from an existing annotation codebook, and evaluates the performance of advanced ChatGPT (GPT-3.5/4) and a natural language inference (NLI)-based model called ZSP. ChatGPT uses the codebook's labeled summaries as prompts, whereas ZSP breaks down the classification task into context, event mode, and class disambiguation to refine task-specific hypotheses. This decomposition enhances interpretability, efficiency, and adaptability to schema changes. The experiments reveal ChatGPT's strengths and limitations, and crucially show that ZSP outperforms dictionary-based methods and is competitive with some supervised models. These findings affirm the value of ZSP for validating event records and advancing ontology development. Our study underscores the efficacy of leveraging transfer learning and existing domain expertise to enhance research efficiency and scalability.
[ { "created": "Tue, 15 Aug 2023 16:41:53 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2024 13:23:08 GMT", "version": "v2" }, { "created": "Thu, 6 Jun 2024 14:46:44 GMT", "version": "v3" } ]
2024-06-07
[ [ "Hu", "Yibo", "" ], [ "Parolin", "Erick Skorupa", "" ], [ "Khan", "Latifur", "" ], [ "Brandt", "Patrick T.", "" ], [ "Osorio", "Javier", "" ], [ "D'Orazio", "Vito J.", "" ] ]
Is it possible to accurately classify political relations within evolving event ontologies without extensive annotations? This study investigates zero-shot learning methods that use expert knowledge from an existing annotation codebook, and evaluates the performance of advanced ChatGPT (GPT-3.5/4) and a natural language inference (NLI)-based model called ZSP. ChatGPT uses the codebook's labeled summaries as prompts, whereas ZSP breaks down the classification task into context, event mode, and class disambiguation to refine task-specific hypotheses. This decomposition enhances interpretability, efficiency, and adaptability to schema changes. The experiments reveal ChatGPT's strengths and limitations, and crucially show that ZSP outperforms dictionary-based methods and is competitive with some supervised models. These findings affirm the value of ZSP for validating event records and advancing ontology development. Our study underscores the efficacy of leveraging transfer learning and existing domain expertise to enhance research efficiency and scalability.
2306.03514
Xinyu Huang
Youcai Zhang, Xinyu Huang, Jinyu Ma, Zhaoyang Li, Zhaochuan Luo, Yanchun Xie, Yuzhuo Qin, Tong Luo, Yaqian Li, Shilong Liu, Yandong Guo, Lei Zhang
Recognize Anything: A Strong Image Tagging Model
Homepage: https://recognize-anything.github.io/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the Recognize Anything Model (RAM): a strong foundation model for image tagging. RAM marks a substantial step for large models in computer vision, demonstrating the zero-shot ability to recognize any common category with high accuracy. RAM introduces a new paradigm for image tagging, leveraging large-scale image-text pairs for training instead of manual annotations. The development of RAM comprises four key steps. Firstly, annotation-free image tags are obtained at scale through automatic text semantic parsing. Subsequently, a preliminary model is trained for automatic annotation by unifying the caption and tagging tasks, supervised by the original texts and parsed tags, respectively. Thirdly, a data engine is employed to generate additional annotations and clean incorrect ones. Lastly, the model is retrained with the processed data and fine-tuned using a smaller but higher-quality dataset. We evaluate the tagging capabilities of RAM on numerous benchmarks and observe impressive zero-shot performance, significantly outperforming CLIP and BLIP. Remarkably, RAM even surpasses fully supervised models and exhibits performance competitive with the Google tagging API. We are releasing RAM at \url{https://recognize-anything.github.io/} to foster the advancement of large models in computer vision.
[ { "created": "Tue, 6 Jun 2023 09:00:10 GMT", "version": "v1" }, { "created": "Wed, 7 Jun 2023 04:24:55 GMT", "version": "v2" }, { "created": "Fri, 9 Jun 2023 15:21:06 GMT", "version": "v3" } ]
2023-06-12
[ [ "Zhang", "Youcai", "" ], [ "Huang", "Xinyu", "" ], [ "Ma", "Jinyu", "" ], [ "Li", "Zhaoyang", "" ], [ "Luo", "Zhaochuan", "" ], [ "Xie", "Yanchun", "" ], [ "Qin", "Yuzhuo", "" ], [ "Luo", "Tong", "" ], [ "Li", "Yaqian", "" ], [ "Liu", "Shilong", "" ], [ "Guo", "Yandong", "" ], [ "Zhang", "Lei", "" ] ]
We present the Recognize Anything Model (RAM): a strong foundation model for image tagging. RAM marks a substantial step for large models in computer vision, demonstrating the zero-shot ability to recognize any common category with high accuracy. RAM introduces a new paradigm for image tagging, leveraging large-scale image-text pairs for training instead of manual annotations. The development of RAM comprises four key steps. Firstly, annotation-free image tags are obtained at scale through automatic text semantic parsing. Subsequently, a preliminary model is trained for automatic annotation by unifying the caption and tagging tasks, supervised by the original texts and parsed tags, respectively. Thirdly, a data engine is employed to generate additional annotations and clean incorrect ones. Lastly, the model is retrained with the processed data and fine-tuned using a smaller but higher-quality dataset. We evaluate the tagging capabilities of RAM on numerous benchmarks and observe impressive zero-shot performance, significantly outperforming CLIP and BLIP. Remarkably, RAM even surpasses fully supervised models and exhibits performance competitive with the Google tagging API. We are releasing RAM at \url{https://recognize-anything.github.io/} to foster the advancement of large models in computer vision.
2112.14825
Yaroslav Golubev
Sergey Titov, Yaroslav Golubev, Timofey Bryksin
ReSplit: Improving the Structure of Jupyter Notebooks by Re-Splitting Their Cells
5 pages, 2 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Jupyter notebooks represent a unique format for programming - a combination of code and Markdown with rich formatting, separated into individual cells. We propose to perceive a Jupyter Notebook cell as a simplified and raw version of a programming function. Similar to functions, Jupyter cells should strive to contain singular, self-contained actions. At the same time, research shows that real-world notebooks fail to do so and suffer from the lack of proper structure. To combat this, we propose ReSplit, an algorithm for an automatic re-splitting of cells in Jupyter notebooks. The algorithm analyzes definition-usage chains in the notebook and consists of two parts - merging and splitting the cells. We ran the algorithm on a large corpus of notebooks to evaluate its performance and its overall effect on notebooks, and evaluated it by human experts: we showed them several notebooks in their original and the re-split form. In 29.5% of cases, the re-split notebook was selected as the preferred way of perceiving the code. We analyze what influenced this decision and describe several individual cases in detail.
[ { "created": "Wed, 29 Dec 2021 21:15:30 GMT", "version": "v1" } ]
2022-01-03
[ [ "Titov", "Sergey", "" ], [ "Golubev", "Yaroslav", "" ], [ "Bryksin", "Timofey", "" ] ]
Jupyter notebooks represent a unique format for programming - a combination of code and Markdown with rich formatting, separated into individual cells. We propose to perceive a Jupyter Notebook cell as a simplified and raw version of a programming function. Similar to functions, Jupyter cells should strive to contain singular, self-contained actions. At the same time, research shows that real-world notebooks fail to do so and suffer from the lack of proper structure. To combat this, we propose ReSplit, an algorithm for an automatic re-splitting of cells in Jupyter notebooks. The algorithm analyzes definition-usage chains in the notebook and consists of two parts - merging and splitting the cells. We ran the algorithm on a large corpus of notebooks to evaluate its performance and its overall effect on notebooks, and evaluated it by human experts: we showed them several notebooks in their original and the re-split form. In 29.5% of cases, the re-split notebook was selected as the preferred way of perceiving the code. We analyze what influenced this decision and describe several individual cases in detail.
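As an illustration of the definition-usage idea, the merging half can be approximated by joining a cell into its successor whenever every top-level name it defines is consumed there. This is a simplified hypothetical heuristic, not the actual ReSplit algorithm:

```python
import ast

def defined_names(code):
    """Names bound in a cell (assignments, function and class definitions)."""
    out = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Assign):
            out |= {t.id for t in node.targets if isinstance(t, ast.Name)}
        elif isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            out.add(node.name)
    return out

def used_names(code):
    """Names read (loaded) anywhere in a cell."""
    return {n.id for n in ast.walk(ast.parse(code))
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}

def merge_adjacent(cells):
    """Merge cell i into cell i+1 when all names it defines are used
    by the next cell, i.e. a tight definition-usage dependency."""
    merged = [cells[0]]
    for nxt in cells[1:]:
        defs = defined_names(merged[-1])
        if defs and defs <= used_names(nxt):
            merged[-1] = merged[-1] + "\n" + nxt
        else:
            merged.append(nxt)
    return merged

cells = ["x = 1", "y = x + 1", "print('done')"]
merged = merge_adjacent(cells)   # first two cells are joined, the last stays separate
```

A real implementation would also need the splitting pass and a notion of cell boundaries that respects display output, which this sketch omits.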
1505.02405
Vasco Manquinho
Miguel Neves and Ruben Martins and Mikol\'a\v{s} Janota and In\^es Lynce and Vasco Manquinho
Exploiting Resolution-based Representations for MaxSAT Solving
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most recent MaxSAT algorithms rely on a succession of calls to a SAT solver in order to find an optimal solution. In particular, several algorithms take advantage of the ability of SAT solvers to identify unsatisfiable subformulas. Usually, these MaxSAT algorithms perform better when small unsatisfiable subformulas are found early. However, this is not the case in many problem instances, since the whole formula is given to the SAT solver in each call. In this paper, we propose to partition the MaxSAT formula using a resolution-based graph representation. Partitions are then iteratively joined by using a proximity measure extracted from the graph representation of the formula. The algorithm ends when only one partition remains and the optimal solution is found. Experimental results show that this new approach further enhances a state of the art MaxSAT solver to optimally solve a larger set of industrial problem instances.
[ { "created": "Sun, 10 May 2015 16:38:15 GMT", "version": "v1" } ]
2015-05-12
[ [ "Neves", "Miguel", "" ], [ "Martins", "Ruben", "" ], [ "Janota", "Mikoláš", "" ], [ "Lynce", "Inês", "" ], [ "Manquinho", "Vasco", "" ] ]
Most recent MaxSAT algorithms rely on a succession of calls to a SAT solver in order to find an optimal solution. In particular, several algorithms take advantage of the ability of SAT solvers to identify unsatisfiable subformulas. Usually, these MaxSAT algorithms perform better when small unsatisfiable subformulas are found early. However, this is not the case in many problem instances, since the whole formula is given to the SAT solver in each call. In this paper, we propose to partition the MaxSAT formula using a resolution-based graph representation. Partitions are then iteratively joined by using a proximity measure extracted from the graph representation of the formula. The algorithm ends when only one partition remains and the optimal solution is found. Experimental results show that this new approach further enhances a state of the art MaxSAT solver to optimally solve a larger set of industrial problem instances.
1904.05396
Yuta Nakahara
Yuta Nakahara, Toshiyasu Matsushima
Covariance Evolution for Spatially "Mt. Fuji" Coupled LDPC Codes
accepted to IEEE Information Theory Workshop (ITW) 2019
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A spatially "Mt. Fuji" coupled low-density parity-check (LDPC) ensemble is a modified version of the original spatially coupled (SC) LDPC ensemble. Its desirable properties were first observed experimentally, and the decoding error probability in the error floor region over the binary erasure channel (BEC) was later analyzed theoretically. In this paper, as the last piece of the theoretical analysis over the BEC, we analyze the decoding error probability in the waterfall region by modifying the covariance evolution that has been used to analyze the original SC-LDPC ensemble.
[ { "created": "Wed, 10 Apr 2019 19:14:04 GMT", "version": "v1" }, { "created": "Tue, 16 Apr 2019 02:24:45 GMT", "version": "v2" }, { "created": "Sat, 17 Aug 2019 12:01:37 GMT", "version": "v3" } ]
2019-08-20
[ [ "Nakahara", "Yuta", "" ], [ "Matsushima", "Toshiyasu", "" ] ]
A spatially "Mt. Fuji" coupled low-density parity-check (LDPC) ensemble is a modified version of the original spatially coupled (SC) LDPC ensemble. Its desirable properties were first observed experimentally, and the decoding error probability in the error floor region over the binary erasure channel (BEC) was later analyzed theoretically. In this paper, as the last piece of the theoretical analysis over the BEC, we analyze the decoding error probability in the waterfall region by modifying the covariance evolution that has been used to analyze the original SC-LDPC ensemble.
2402.00293
Takuma Yagi
Takuma Yagi, Misaki Ohashi, Yifei Huang, Ryosuke Furuta, Shungo Adachi, Toutai Mitsuyama, Yoichi Sato
FineBio: A Fine-Grained Video Dataset of Biological Experiments with Hierarchical Annotation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In the development of science, accurate and reproducible documentation of the experimental process is crucial. Automatic recognition of the actions in experiments from videos would help experimenters by complementing the recording of experiments. Towards this goal, we propose FineBio, a new fine-grained video dataset of people performing biological experiments. The dataset consists of multi-view videos of 32 participants performing mock biological experiments with a total duration of 14.5 hours. One experiment forms a hierarchical structure, where a protocol consists of several steps, each further decomposed into a set of atomic operations. The uniqueness of biological experiments is that while they require strict adherence to the steps described in each protocol, there is freedom in the order of atomic operations. We provide hierarchical annotation on protocols, steps, atomic operations, object locations, and their manipulation states, providing new challenges for structured activity understanding and hand-object interaction recognition. To identify the challenges of activity understanding in biological experiments, we introduce baseline models and results on four different tasks: (i) step segmentation, (ii) atomic operation detection, (iii) object detection, and (iv) manipulated/affected object detection. Dataset and code are available from https://github.com/aistairc/FineBio.
[ { "created": "Thu, 1 Feb 2024 02:47:39 GMT", "version": "v1" } ]
2024-02-02
[ [ "Yagi", "Takuma", "" ], [ "Ohashi", "Misaki", "" ], [ "Huang", "Yifei", "" ], [ "Furuta", "Ryosuke", "" ], [ "Adachi", "Shungo", "" ], [ "Mitsuyama", "Toutai", "" ], [ "Sato", "Yoichi", "" ] ]
In the development of science, accurate and reproducible documentation of the experimental process is crucial. Automatic recognition of the actions in experiments from videos would help experimenters by complementing the recording of experiments. Towards this goal, we propose FineBio, a new fine-grained video dataset of people performing biological experiments. The dataset consists of multi-view videos of 32 participants performing mock biological experiments with a total duration of 14.5 hours. One experiment forms a hierarchical structure, where a protocol consists of several steps, each further decomposed into a set of atomic operations. The uniqueness of biological experiments is that while they require strict adherence to the steps described in each protocol, there is freedom in the order of atomic operations. We provide hierarchical annotation on protocols, steps, atomic operations, object locations, and their manipulation states, providing new challenges for structured activity understanding and hand-object interaction recognition. To identify the challenges of activity understanding in biological experiments, we introduce baseline models and results on four different tasks: (i) step segmentation, (ii) atomic operation detection, (iii) object detection, and (iv) manipulated/affected object detection. Dataset and code are available from https://github.com/aistairc/FineBio.
2304.06939
Wanrong Zhu
Wanrong Zhu and Jack Hessel and Anas Awadalla and Samir Yitzhak Gadre and Jesse Dodge and Alex Fang and Youngjae Yu and Ludwig Schmidt and William Yang Wang and Yejin Choi
Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with Text
NeurIPS D&B 2023. Project homepage: https://github.com/allenai/mmc4
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-context vision and language models like Flamingo support arbitrarily interleaved sequences of images and text as input. This format not only enables few-shot learning via interleaving independent supervised (image, text) examples, but also more complex prompts involving interaction between images, e.g., "What do image A and image B have in common?" To support this interface, pretraining occurs over web corpora that similarly contain interleaved images+text. To date, however, large-scale data of this form have not been publicly available. We release Multimodal C4, an augmentation of the popular text-only C4 corpus with images interleaved. We use a linear assignment algorithm to place images into longer bodies of text using CLIP features, a process that we show outperforms alternatives. Multimodal C4 spans everyday topics like cooking, travel, technology, etc. A manual inspection of a random sample of documents shows that a vast majority (88%) of images are topically relevant, and that linear assignment frequently selects individual sentences specifically well-aligned with each image (80%). After filtering NSFW images, ads, etc., the resulting corpus consists of 101.2M documents with 571M images interleaved in 43B English tokens.
[ { "created": "Fri, 14 Apr 2023 06:17:46 GMT", "version": "v1" }, { "created": "Fri, 9 Jun 2023 21:49:58 GMT", "version": "v2" }, { "created": "Sat, 28 Oct 2023 04:19:41 GMT", "version": "v3" } ]
2023-10-31
[ [ "Zhu", "Wanrong", "" ], [ "Hessel", "Jack", "" ], [ "Awadalla", "Anas", "" ], [ "Gadre", "Samir Yitzhak", "" ], [ "Dodge", "Jesse", "" ], [ "Fang", "Alex", "" ], [ "Yu", "Youngjae", "" ], [ "Schmidt", "Ludwig", "" ], [ "Wang", "William Yang", "" ], [ "Choi", "Yejin", "" ] ]
In-context vision and language models like Flamingo support arbitrarily interleaved sequences of images and text as input. This format not only enables few-shot learning via interleaving independent supervised (image, text) examples, but also more complex prompts involving interaction between images, e.g., "What do image A and image B have in common?" To support this interface, pretraining occurs over web corpora that similarly contain interleaved images+text. To date, however, large-scale data of this form have not been publicly available. We release Multimodal C4, an augmentation of the popular text-only C4 corpus with images interleaved. We use a linear assignment algorithm to place images into longer bodies of text using CLIP features, a process that we show outperforms alternatives. Multimodal C4 spans everyday topics like cooking, travel, technology, etc. A manual inspection of a random sample of documents shows that a vast majority (88%) of images are topically relevant, and that linear assignment frequently selects individual sentences specifically well-aligned with each image (80%). After filtering NSFW images, ads, etc., the resulting corpus consists of 101.2M documents with 571M images interleaved in 43B English tokens.
1603.05365
B.Sundar Rajan
Anindya Gupta and B. Sundar Rajan
A Relation Between Network Computation and Functional Index Coding Problems
3 figures, 7 tables and 9 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In contrast to the network coding problem wherein the sinks in a network demand subsets of the source messages, in a network computation problem the sinks demand functions of the source messages. Similarly, in the functional index coding problem, the side information and demands of the clients include disjoint sets of functions of the information messages held by the transmitter instead of disjoint subsets of the messages, as is the case in the conventional index coding problem. It is known that any network coding problem can be transformed into an index coding problem and vice versa. In this work, we establish a similar relationship between network computation problems and a class of functional index coding problems, viz., those in which only the demands of the clients include functions of messages. We show that any network computation problem can be converted into a functional index coding problem wherein some clients demand functions of messages and vice versa. We prove that a solution for a network computation problem exists if and only if a functional index code (of a specific length determined by the network computation problem) for a suitably constructed functional index coding problem exists. Conversely, a functional index coding problem admits a solution of a specified length if and only if a suitably constructed network computation problem admits a solution.
[ { "created": "Thu, 17 Mar 2016 06:31:50 GMT", "version": "v1" } ]
2016-03-18
[ [ "Gupta", "Anindya", "" ], [ "Rajan", "B. Sundar", "" ] ]
In contrast to the network coding problem wherein the sinks in a network demand subsets of the source messages, in a network computation problem the sinks demand functions of the source messages. Similarly, in the functional index coding problem, the side information and demands of the clients include disjoint sets of functions of the information messages held by the transmitter instead of disjoint subsets of the messages, as is the case in the conventional index coding problem. It is known that any network coding problem can be transformed into an index coding problem and vice versa. In this work, we establish a similar relationship between network computation problems and a class of functional index coding problems, viz., those in which only the demands of the clients include functions of messages. We show that any network computation problem can be converted into a functional index coding problem wherein some clients demand functions of messages and vice versa. We prove that a solution for a network computation problem exists if and only if a functional index code (of a specific length determined by the network computation problem) for a suitably constructed functional index coding problem exists. Conversely, a functional index coding problem admits a solution of a specified length if and only if a suitably constructed network computation problem admits a solution.
1910.06907
Utku Kose
Utku Kose
Techniques for Adversarial Examples Threatening the Safety of Artificial Intelligence Based Systems
International Science and Innovation Congress 2019, pp. 643-655, 13 pages, 10 figures
null
null
null
cs.LG cs.AI math.OC
http://creativecommons.org/licenses/by-sa/4.0/
Artificial intelligence is known as the most effective technological field for the rapid developments shaping the future of the world. Even today, it is possible to see intense use of intelligent systems in all fields of life. Although the advantages of Artificial Intelligence are widely observed, there is also a dark side: efforts to design hacking-oriented techniques against Artificial Intelligence. Thanks to such techniques, it is possible to trick intelligent systems into producing directed, unsuccessful outputs. This is also critical for the cyber wars of the future, as it is predicted that such wars will be fought by unmanned, autonomous intelligent systems. Based on these explanations, the objective of this study is to provide information regarding adversarial examples threatening Artificial Intelligence and to focus on the details of some techniques used for creating adversarial examples. Adversarial examples are training data that can trick a Machine Learning technique into learning incorrectly about the target problem, causing an unsuccessful or maliciously directed intelligent system in the end. The study enables readers to learn enough about the details of recent techniques for creating adversarial examples.
[ { "created": "Sun, 29 Sep 2019 21:56:59 GMT", "version": "v1" } ]
2019-10-16
[ [ "Kose", "Utku", "" ] ]
Artificial intelligence is known as the most effective technological field for the rapid developments shaping the future of the world. Even today, it is possible to see intense use of intelligent systems in all fields of life. Although the advantages of Artificial Intelligence are widely observed, there is also a dark side: efforts to design hacking-oriented techniques against Artificial Intelligence. Thanks to such techniques, it is possible to trick intelligent systems into producing directed, unsuccessful outputs. This is also critical for the cyber wars of the future, as it is predicted that such wars will be fought by unmanned, autonomous intelligent systems. Based on these explanations, the objective of this study is to provide information regarding adversarial examples threatening Artificial Intelligence and to focus on the details of some techniques used for creating adversarial examples. Adversarial examples are training data that can trick a Machine Learning technique into learning incorrectly about the target problem, causing an unsuccessful or maliciously directed intelligent system in the end. The study enables readers to learn enough about the details of recent techniques for creating adversarial examples.
2010.13266
Ramya Srinivasan
Ramya Srinivasan, Kanji Uchino
Biases in Generative Art -- A Causal Look from the Lens of Art History
ACM FAccT March 3--10, 2021, Virtual Event, Canada
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
With rapid progress in artificial intelligence (AI), the popularity of generative art has grown substantially. From creating paintings to generating novel art styles, AI-based generative art has showcased a variety of applications. However, there has been little focus on the ethical impacts of AI-based generative art. In this work, we investigate biases in the generative art AI pipeline, ranging from those that can originate from improper problem formulation to those related to algorithm design. Viewing from the lens of art history, we discuss the socio-cultural impacts of these biases. Leveraging causal models, we highlight how current methods fall short in modeling the process of art creation and thus contribute to various types of biases. We illustrate the same through case studies, in particular those related to style transfer. To the best of our knowledge, this is the first extensive analysis that investigates biases in the generative art AI pipeline from the perspective of art history. We hope our work sparks interdisciplinary discussions related to the accountability of generative art.
[ { "created": "Mon, 26 Oct 2020 00:49:09 GMT", "version": "v1" }, { "created": "Tue, 16 Feb 2021 19:01:11 GMT", "version": "v2" } ]
2021-02-18
[ [ "Srinivasan", "Ramya", "" ], [ "Uchino", "Kanji", "" ] ]
With rapid progress in artificial intelligence (AI), the popularity of generative art has grown substantially. From creating paintings to generating novel art styles, AI-based generative art has showcased a variety of applications. However, there has been little focus on the ethical impacts of AI-based generative art. In this work, we investigate biases in the generative art AI pipeline, ranging from those that can originate from improper problem formulation to those related to algorithm design. Viewing from the lens of art history, we discuss the socio-cultural impacts of these biases. Leveraging causal models, we highlight how current methods fall short in modeling the process of art creation and thus contribute to various types of biases. We illustrate the same through case studies, in particular those related to style transfer. To the best of our knowledge, this is the first extensive analysis that investigates biases in the generative art AI pipeline from the perspective of art history. We hope our work sparks interdisciplinary discussions related to the accountability of generative art.
1807.00316
Andrii Striuk
Andrii Striuk
Using Elements Of Semantic Parsing In E-Learning Environments
null
Information Technologies and Knowledge (2007) 297-299
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss possibilities for using semantic parsing to estimate the correspondence of text materials to teaching aims, the correspondence of test tasks to theoretical materials, and other problems arising during distance course design and the educational process itself in e-learning environments.
[ { "created": "Sun, 1 Jul 2018 11:26:43 GMT", "version": "v1" } ]
2018-07-03
[ [ "Striuk", "Andrii", "" ] ]
We discuss possibilities for using semantic parsing to estimate the correspondence of text materials to teaching aims, the correspondence of test tasks to theoretical materials, and other problems arising during distance course design and the educational process itself in e-learning environments.
1603.05335
Delu Zeng
Tong Zhao, Lin Li, Xinghao Ding, Yue Huang and Delu Zeng
Saliency Detection with Spaces of Background-based Distribution
5 pages, 6 figures, Accepted by IEEE Signal Processing Letters in March 2016
null
10.1109/LSP.2016.2544781
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this letter, an effective image saliency detection method is proposed by constructing novel spaces to model the background and redefining the distance of salient patches from the background. Concretely, given the backgroundness prior, eigendecomposition is utilized to create four spaces of background-based distribution (SBD) to model the background, in which a more appropriate metric (the Mahalanobis distance) is employed to measure the saliency of every image patch away from the background. After that, a coarse saliency map is obtained by integrating the four adjusted Mahalanobis distance maps, each of which is formed by the distances between all the patches and the background in the corresponding SBD. To be more discriminative, the coarse saliency map is further enhanced into a posterior probability map within a Bayesian perspective. Finally, the final saliency map is generated by properly refining the posterior probability map with the geodesic distance. Experimental results on two widely used datasets show that the proposed method is effective compared with state-of-the-art algorithms.
[ { "created": "Thu, 17 Mar 2016 02:18:30 GMT", "version": "v1" } ]
2016-05-04
[ [ "Zhao", "Tong", "" ], [ "Li", "Lin", "" ], [ "Ding", "Xinghao", "" ], [ "Huang", "Yue", "" ], [ "Zeng", "Delu", "" ] ]
In this letter, an effective image saliency detection method is proposed by constructing novel spaces to model the background and redefining the distance of salient patches from the background. Concretely, given the backgroundness prior, eigendecomposition is utilized to create four spaces of background-based distribution (SBD) to model the background, in which a more appropriate metric (the Mahalanobis distance) is employed to measure the saliency of every image patch away from the background. After that, a coarse saliency map is obtained by integrating the four adjusted Mahalanobis distance maps, each of which is formed by the distances between all the patches and the background in the corresponding SBD. To be more discriminative, the coarse saliency map is further enhanced into a posterior probability map within a Bayesian perspective. Finally, the final saliency map is generated by properly refining the posterior probability map with the geodesic distance. Experimental results on two widely used datasets show that the proposed method is effective compared with state-of-the-art algorithms.
1304.3438
Alan Bundy
Alan Bundy
Incidence Calculus: A Mechanism for Probabilistic Reasoning
Appears in Proceedings of the First Conference on Uncertainty in Artificial Intelligence (UAI1985)
null
null
UAI-P-1985-PG-177-184
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanisms for the automation of uncertainty are required for expert systems. Sometimes these mechanisms need to obey the properties of probabilistic reasoning. A purely numeric mechanism, like those proposed so far, cannot provide a probabilistic logic with truth functional connectives. We propose an alternative mechanism, Incidence Calculus, which is based on a representation of uncertainty using sets of points, which might represent situations, models or possible worlds. Incidence Calculus does provide a probabilistic logic with truth functional connectives.
[ { "created": "Wed, 27 Mar 2013 19:57:29 GMT", "version": "v1" } ]
2013-04-15
[ [ "Bundy", "Alan", "" ] ]
Mechanisms for the automation of uncertainty are required for expert systems. Sometimes these mechanisms need to obey the properties of probabilistic reasoning. A purely numeric mechanism, like those proposed so far, cannot provide a probabilistic logic with truth functional connectives. We propose an alternative mechanism, Incidence Calculus, which is based on a representation of uncertainty using sets of points, which might represent situations, models or possible worlds. Incidence Calculus does provide a probabilistic logic with truth functional connectives.
2009.02934
Anna Frid
Anna E. Frid, Enzo Laborde, Jarkko Peltom\"aki
On prefix palindromic length of automatic words
revised version, to appear in Theoret. Comput. Sci
null
null
null
cs.FL cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prefix palindromic length $\mathrm{PPL}_{\mathbf{u}}(n)$ of an infinite word $\mathbf{u}$ is the minimal number of concatenated palindromes needed to express the prefix of length $n$ of $\mathbf{u}$. Since 2013, it is still unknown whether $\mathrm{PPL}_{\mathbf{u}}(n)$ is unbounded for every aperiodic infinite word $\mathbf{u}$, even though this has been proven for almost all aperiodic words. At the same time, the only well-known nontrivial infinite word for which the function $\mathrm{PPL}_{\mathbf{u}}(n)$ has been precisely computed is the Thue-Morse word $\mathbf{t}$. This word is $2$-automatic and, predictably, its function $\mathrm{PPL}_{\mathbf{t}}(n)$ is $2$-regular, but is this the case for all automatic words? In this paper, we prove that this function is $k$-regular for every $k$-automatic word containing only a finite number of palindromes. For two such words, namely the paperfolding word and the Rudin-Shapiro word, we derive a formula for this function. Our computational experiments suggest that generally this is not true: for the period-doubling word, the prefix palindromic length does not look $2$-regular, and for the Fibonacci word, it does not look Fibonacci-regular. If proven, these results would give rare (if not the first) examples of a natural function of an automatic word which is not regular.
[ { "created": "Mon, 7 Sep 2020 08:09:32 GMT", "version": "v1" }, { "created": "Wed, 9 Jun 2021 10:14:06 GMT", "version": "v2" } ]
2021-06-10
[ [ "Frid", "Anna E.", "" ], [ "Laborde", "Enzo", "" ], [ "Peltomäki", "Jarkko", "" ] ]
The prefix palindromic length $\mathrm{PPL}_{\mathbf{u}}(n)$ of an infinite word $\mathbf{u}$ is the minimal number of concatenated palindromes needed to express the prefix of length $n$ of $\mathbf{u}$. Since 2013, it is still unknown whether $\mathrm{PPL}_{\mathbf{u}}(n)$ is unbounded for every aperiodic infinite word $\mathbf{u}$, even though this has been proven for almost all aperiodic words. At the same time, the only well-known nontrivial infinite word for which the function $\mathrm{PPL}_{\mathbf{u}}(n)$ has been precisely computed is the Thue-Morse word $\mathbf{t}$. This word is $2$-automatic and, predictably, its function $\mathrm{PPL}_{\mathbf{t}}(n)$ is $2$-regular, but is this the case for all automatic words? In this paper, we prove that this function is $k$-regular for every $k$-automatic word containing only a finite number of palindromes. For two such words, namely the paperfolding word and the Rudin-Shapiro word, we derive a formula for this function. Our computational experiments suggest that generally this is not true: for the period-doubling word, the prefix palindromic length does not look $2$-regular, and for the Fibonacci word, it does not look Fibonacci-regular. If proven, these results would give rare (if not the first) examples of a natural function of an automatic word which is not regular.
2206.12464
Charalambos Poullis
Qiao Chen, Charalambos Poullis
Motion Estimation for Large Displacements and Deformations
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large displacement optical flow is an integral part of many computer vision tasks. Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient and smoothness, making them sensitive to noise in the sparse matches, deformations, and arbitrarily large displacements. This paper addresses this problem and presents HybridFlow, a variational motion estimation framework for large displacements and deformations. A multi-scale hybrid matching approach is performed on the image pairs. Coarse-scale clusters formed by classifying pixels according to their feature descriptors are matched using the clusters' context descriptors. We apply a multi-scale graph matching on the finer-scale superpixels contained within each matched pair of coarse-scale clusters. Small clusters that cannot be further subdivided are matched using localized feature matching. Together, these initial matches form the flow, which is propagated by an edge-preserving interpolation and variational refinement. Our approach does not require training and is robust to substantial displacements and rigid and non-rigid transformations due to motion in the scene, making it ideal for large-scale imagery such as Wide-Area Motion Imagery (WAMI). More notably, HybridFlow works on directed graphs of arbitrary topology representing perceptual groups, which improves motion estimation in the presence of significant deformations. We demonstrate HybridFlow's superior performance to state-of-the-art variational techniques on two benchmark datasets and report comparable results with state-of-the-art deep-learning-based techniques.
[ { "created": "Fri, 24 Jun 2022 18:53:22 GMT", "version": "v1" } ]
2022-06-28
[ [ "Chen", "Qiao", "" ], [ "Poullis", "Charalambos", "" ] ]
Large displacement optical flow is an integral part of many computer vision tasks. Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient and smoothness, making them sensitive to noise in the sparse matches, deformations, and arbitrarily large displacements. This paper addresses this problem and presents HybridFlow, a variational motion estimation framework for large displacements and deformations. A multi-scale hybrid matching approach is performed on the image pairs. Coarse-scale clusters formed by classifying pixels according to their feature descriptors are matched using the clusters' context descriptors. We apply a multi-scale graph matching on the finer-scale superpixels contained within each matched pair of coarse-scale clusters. Small clusters that cannot be further subdivided are matched using localized feature matching. Together, these initial matches form the flow, which is propagated by an edge-preserving interpolation and variational refinement. Our approach does not require training and is robust to substantial displacements and rigid and non-rigid transformations due to motion in the scene, making it ideal for large-scale imagery such as Wide-Area Motion Imagery (WAMI). More notably, HybridFlow works on directed graphs of arbitrary topology representing perceptual groups, which improves motion estimation in the presence of significant deformations. We demonstrate HybridFlow's superior performance to state-of-the-art variational techniques on two benchmark datasets and report comparable results with state-of-the-art deep-learning-based techniques.
1606.09296
Mihajlo Grbovic
Mihajlo Grbovic, Guy Halawi, Zohar Karnin, Yoelle Maarek
How Many Folders Do You Really Need?
10 pages, 12 figures, Proceedings of the 23rd ACM International Conference on Information and Knowledge Management (CIKM 2014), Shanghai, China
Proceedings of the 23rd ACM International Conference on Information and Knowledge Management (CIKM 2014), Shanghai, China
10.1145/2661829.2662018
null
cs.AI cs.HC cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently, however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches, because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 "latent" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in detail the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels and the selection of features to the training of models. Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of the Yahoo mail service who elected to be part of such research studies. Our system achieved precision and recall rates close to 90%, and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry and could support the invention of new large-scale email discovery paradigms that had not been possible before.
[ { "created": "Wed, 29 Jun 2016 21:35:24 GMT", "version": "v1" } ]
2016-07-01
[ [ "Grbovic", "Mihajlo", "" ], [ "Halawi", "Guy", "" ], [ "Karnin", "Zohar", "" ], [ "Maarek", "Yoelle", "" ] ]
Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently, however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches, because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 "latent" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in detail the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels and the selection of features to the training of models. Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of the Yahoo mail service who elected to be part of such research studies. Our system achieved precision and recall rates close to 90%, and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry and could support the invention of new large-scale email discovery paradigms that had not been possible before.
1808.05488
Lukas Cavigelli
Lukas Cavigelli, Luca Benini
CBinfer: Exploiting Frame-to-Frame Locality for Faster Convolutional Network Inference on Video Streams
arXiv admin note: substantial text overlap with arXiv:1704.04313
null
null
null
cs.CV cs.AI cs.NE eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The last few years have brought advances in computer vision at an amazing pace, grounded on new findings in deep neural network construction and training as well as the availability of large labeled datasets. Applying these networks to images demands a high computational effort and pushes the use of state-of-the-art networks on real-time video data out of reach of embedded platforms. Many recent works focus on reducing network complexity for real-time inference on embedded computing platforms. We adopt an orthogonal viewpoint and propose a novel algorithm exploiting the spatio-temporal sparsity of pixel changes. This optimized inference procedure resulted in an average speed-up of 9.1x over cuDNN on the Tegra X2 platform at a negligible accuracy loss of <0.1% and no retraining of the network for a semantic segmentation application. Similarly, an average speed-up of 7.0x has been achieved for a pose detection DNN and a reduction of 5x of the number of arithmetic operations to be performed for object detection on static camera video surveillance data. These throughput gains combined with a lower power consumption result in an energy efficiency of 511 GOp/s/W compared to 70 GOp/s/W for the baseline.
[ { "created": "Wed, 15 Aug 2018 15:27:29 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2019 17:07:31 GMT", "version": "v2" } ]
2019-03-05
[ [ "Cavigelli", "Lukas", "" ], [ "Benini", "Luca", "" ] ]
The last few years have brought advances in computer vision at an amazing pace, grounded on new findings in deep neural network construction and training as well as the availability of large labeled datasets. Applying these networks to images demands a high computational effort and pushes the use of state-of-the-art networks on real-time video data out of reach of embedded platforms. Many recent works focus on reducing network complexity for real-time inference on embedded computing platforms. We adopt an orthogonal viewpoint and propose a novel algorithm exploiting the spatio-temporal sparsity of pixel changes. This optimized inference procedure resulted in an average speed-up of 9.1x over cuDNN on the Tegra X2 platform at a negligible accuracy loss of <0.1% and no retraining of the network for a semantic segmentation application. Similarly, an average speed-up of 7.0x has been achieved for a pose detection DNN and a reduction of 5x of the number of arithmetic operations to be performed for object detection on static camera video surveillance data. These throughput gains combined with a lower power consumption result in an energy efficiency of 511 GOp/s/W compared to 70 GOp/s/W for the baseline.
1604.00427
Yu-Chuan Su
Yu-Chuan Su, Kristen Grauman
Leaving Some Stones Unturned: Dynamic Feature Prioritization for Activity Detection in Streaming Video
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current approaches for activity recognition often ignore constraints on computational resources: 1) they rely on extensive feature computation to obtain rich descriptors on all frames, and 2) they assume batch-mode access to the entire test video at once. We propose a new active approach to activity recognition that prioritizes "what to compute when" in order to make timely predictions. The main idea is to learn a policy that dynamically schedules the sequence of features to compute on selected frames of a given test video. In contrast to traditional static feature selection, our approach continually re-prioritizes computation based on the accumulated history of observations and accounts for the transience of those observations in ongoing video. We develop variants to handle both the batch and streaming settings. On two challenging datasets, our method provides significantly better accuracy than alternative techniques for a wide range of computational budgets.
[ { "created": "Fri, 1 Apr 2016 22:37:28 GMT", "version": "v1" } ]
2016-04-05
[ [ "Su", "Yu-Chuan", "" ], [ "Grauman", "Kristen", "" ] ]
Current approaches for activity recognition often ignore constraints on computational resources: 1) they rely on extensive feature computation to obtain rich descriptors on all frames, and 2) they assume batch-mode access to the entire test video at once. We propose a new active approach to activity recognition that prioritizes "what to compute when" in order to make timely predictions. The main idea is to learn a policy that dynamically schedules the sequence of features to compute on selected frames of a given test video. In contrast to traditional static feature selection, our approach continually re-prioritizes computation based on the accumulated history of observations and accounts for the transience of those observations in ongoing video. We develop variants to handle both the batch and streaming settings. On two challenging datasets, our method provides significantly better accuracy than alternative techniques for a wide range of computational budgets.
1310.3556
Abhisek Kundu
Abhisek Kundu, Srinivas Nambirajan, Petros Drineas
Identifying Influential Entries in a Matrix
There is a bug in the proof of Lemma 5, which we are currently working to fix
null
null
null
cs.NA cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For any matrix A in R^(m x n) of rank \rho, we present a probability distribution over the entries of A (the element-wise leverage scores of equation (2)) that reveals the most influential entries in the matrix. From a theoretical perspective, we prove that sampling at most s = O ((m + n) \rho^2 ln (m + n)) entries of the matrix (see eqn. (3) for the precise value of s) with respect to these scores and solving the nuclear norm minimization problem on the sampled entries, reconstructs A exactly. To the best of our knowledge, these are the strongest theoretical guarantees on matrix completion without any incoherence assumptions on the matrix A. From an experimental perspective, we show that entries corresponding to high element-wise leverage scores reveal structural properties of the data matrix that are of interest to domain scientists.
[ { "created": "Mon, 14 Oct 2013 03:49:02 GMT", "version": "v1" }, { "created": "Sat, 14 Dec 2013 12:13:32 GMT", "version": "v2" } ]
2013-12-17
[ [ "Kundu", "Abhisek", "" ], [ "Nambirajan", "Srinivas", "" ], [ "Drineas", "Petros", "" ] ]
For any matrix A in R^(m x n) of rank \rho, we present a probability distribution over the entries of A (the element-wise leverage scores of equation (2)) that reveals the most influential entries in the matrix. From a theoretical perspective, we prove that sampling at most s = O ((m + n) \rho^2 ln (m + n)) entries of the matrix (see eqn. (3) for the precise value of s) with respect to these scores and solving the nuclear norm minimization problem on the sampled entries, reconstructs A exactly. To the best of our knowledge, these are the strongest theoretical guarantees on matrix completion without any incoherence assumptions on the matrix A. From an experimental perspective, we show that entries corresponding to high element-wise leverage scores reveal structural properties of the data matrix that are of interest to domain scientists.
cs/9810011
Dr. Wolfram Hardt
Wolfram Hardt, Bernd Kleinjohann
Flysig: Dataflow Oriented Delay-Insensitive Processor for Rapid Prototyping of Signal Processing
6 pages, 10 figures
Nineth IEEE International Workshop on Rapid System Prototyping 1998, Belgium, IEEE Computer Society Press
10.1109/IWRSP.1998.676682
null
cs.AR
null
As the one-chip integration of HW-modules designed by different companies becomes more and more popular, the reliability of a HW-design and the evaluation of its timing behavior during the prototype stage are absolutely necessary. One way to guarantee reliability is the use of robust design styles, e.g., delay-insensitivity. For early timing evaluation, two aspects must be considered: a) the timing needs to be proportional to technology variations, and b) the implemented architecture should be identical for prototype and target. The former can also be met by a delay-insensitive implementation. The latter is the key point: a unified architecture is needed for prototyping as well as implementation. Our new approach to rapid prototyping of signal processing tasks is based on a configurable, delay-insensitively implemented processor called Flysig. In essence, the Flysig processor can be understood as a complex FPGA where the CLBs are substituted by bit-serial operators. In this paper, the general concept is detailed and first experimental results are given to demonstrate the main advantages: a delay-insensitive design style, direct correspondence between prototyping and target architecture, high performance, and a reasonable shortening of the design cycle.
[ { "created": "Mon, 12 Oct 1998 10:11:05 GMT", "version": "v1" } ]
2016-11-17
[ [ "Hardt", "Wolfram", "" ], [ "Kleinjohann", "Bernd", "" ] ]
As the one-chip integration of HW-modules designed by different companies becomes more and more popular, the reliability of a HW-design and the evaluation of its timing behavior during the prototype stage are absolutely necessary. One way to guarantee reliability is the use of robust design styles, e.g., delay-insensitivity. For early timing evaluation, two aspects must be considered: a) the timing needs to be proportional to technology variations, and b) the implemented architecture should be identical for prototype and target. The former can also be met by a delay-insensitive implementation. The latter is the key point: a unified architecture is needed for prototyping as well as implementation. Our new approach to rapid prototyping of signal processing tasks is based on a configurable, delay-insensitively implemented processor called Flysig. In essence, the Flysig processor can be understood as a complex FPGA where the CLBs are substituted by bit-serial operators. In this paper, the general concept is detailed and first experimental results are given to demonstrate the main advantages: a delay-insensitive design style, direct correspondence between prototyping and target architecture, high performance, and a reasonable shortening of the design cycle.
2207.03928
Matteo Manica
Matteo Manica, Jannis Born, Joris Cadow, Dimitrios Christofidellis, Ashish Dave, Dean Clarke, Yves Gaetan Nana Teukam, Giorgio Giannone, Samuel C. Hoffman, Matthew Buchan, Vijil Chenthamarakshan, Timothy Donovan, Hsiang Han Hsu, Federico Zipoli, Oliver Schilter, Akihiro Kishimoto, Lisa Hamada, Inkit Padhi, Karl Wehden, Lauren McHugh, Alexy Khrabrov, Payel Das, Seiji Takeda, and John R. Smith
Accelerating Material Design with the Generative Toolkit for Scientific Discovery
15 pages, 2 figures
Nature Partner Journals (npj) Computational Materials 9, 69 (2023)
10.1038/s41524-023-01028-1
null
cs.LG cs.AI cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
With the growing availability of data within various scientific domains, generative models hold enormous potential to accelerate scientific discovery. They harness powerful representations learned from datasets to speed up the formulation of novel hypotheses with the potential to impact material discovery broadly. We present the Generative Toolkit for Scientific Discovery (GT4SD). This extensible open-source library enables scientists, developers, and researchers to train and use state-of-the-art generative models to accelerate scientific discovery focused on material design.
[ { "created": "Fri, 8 Jul 2022 14:28:13 GMT", "version": "v1" }, { "created": "Wed, 27 Jul 2022 12:37:05 GMT", "version": "v2" }, { "created": "Thu, 1 Dec 2022 21:49:03 GMT", "version": "v3" }, { "created": "Tue, 31 Jan 2023 12:37:12 GMT", "version": "v4" } ]
2023-08-25
[ [ "Manica", "Matteo", "" ], [ "Born", "Jannis", "" ], [ "Cadow", "Joris", "" ], [ "Christofidellis", "Dimitrios", "" ], [ "Dave", "Ashish", "" ], [ "Clarke", "Dean", "" ], [ "Teukam", "Yves Gaetan Nana", "" ], [ "Giannone", "Giorgio", "" ], [ "Hoffman", "Samuel C.", "" ], [ "Buchan", "Matthew", "" ], [ "Chenthamarakshan", "Vijil", "" ], [ "Donovan", "Timothy", "" ], [ "Hsu", "Hsiang Han", "" ], [ "Zipoli", "Federico", "" ], [ "Schilter", "Oliver", "" ], [ "Kishimoto", "Akihiro", "" ], [ "Hamada", "Lisa", "" ], [ "Padhi", "Inkit", "" ], [ "Wehden", "Karl", "" ], [ "McHugh", "Lauren", "" ], [ "Khrabrov", "Alexy", "" ], [ "Das", "Payel", "" ], [ "Takeda", "Seiji", "" ], [ "Smith", "John R.", "" ] ]
With the growing availability of data within various scientific domains, generative models hold enormous potential to accelerate scientific discovery. They harness powerful representations learned from datasets to speed up the formulation of novel hypotheses with the potential to impact material discovery broadly. We present the Generative Toolkit for Scientific Discovery (GT4SD). This extensible open-source library enables scientists, developers, and researchers to train and use state-of-the-art generative models to accelerate scientific discovery focused on material design.
1906.03683
Kuan-Hui Lee
Kuan-Hui Lee, Takaaki Tagawa, Jia-En M. Pan, Adrien Gaidon, Bertrand Douillard
An Attention-based Recurrent Convolutional Network for Vehicle Taillight Recognition
Accepted by IV 2019
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicle taillight recognition is an important application for automated driving, especially for intent prediction of ado vehicles and trajectory planning of the ego vehicle. In this work, we propose an end-to-end deep learning framework to recognize taillights, i.e. rear turn and brake signals, from a sequence of images. The proposed method starts with a Convolutional Neural Network (CNN) to extract spatial features, and then applies a Long Short-Term Memory network (LSTM) to learn temporal dependencies. Furthermore, we integrate attention models in both spatial and temporal domains, where the attention models learn to selectively focus on both spatial and temporal features. Our method is able to outperform the state of the art in terms of accuracy on the UC Merced Vehicle Rear Signal Dataset, demonstrating the effectiveness of attention models for vehicle taillight recognition.
[ { "created": "Sun, 9 Jun 2019 18:08:49 GMT", "version": "v1" } ]
2019-06-11
[ [ "Lee", "Kuan-Hui", "" ], [ "Tagawa", "Takaaki", "" ], [ "Pan", "Jia-En M.", "" ], [ "Gaidon", "Adrien", "" ], [ "Douillard", "Bertrand", "" ] ]
Vehicle taillight recognition is an important application for automated driving, especially for intent prediction of ado vehicles and trajectory planning of the ego vehicle. In this work, we propose an end-to-end deep learning framework to recognize taillights, i.e. rear turn and brake signals, from a sequence of images. The proposed method starts with a Convolutional Neural Network (CNN) to extract spatial features, and then applies a Long Short-Term Memory network (LSTM) to learn temporal dependencies. Furthermore, we integrate attention models in both spatial and temporal domains, where the attention models learn to selectively focus on both spatial and temporal features. Our method is able to outperform the state of the art in terms of accuracy on the UC Merced Vehicle Rear Signal Dataset, demonstrating the effectiveness of attention models for vehicle taillight recognition.
1809.03392
Masahiro Okubo
Masahiro Okubo, Tesshu Hanaka, Hirotaka Ono
Optimal Partition of a Tree with Social Distance
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of finding a partition of a graph $G$ with maximum social welfare based on the social distance between vertices in $G$, called MaxSWP. This problem is known to be NP-hard in general. In this paper, we first give a complete characterization of optimal partitions of trees with small diameters. Then, by utilizing these results, we show that MaxSWP can be solved in linear time for trees. Moreover, we show that MaxSWP is NP-hard even for 4-regular graphs.
[ { "created": "Mon, 10 Sep 2018 15:24:00 GMT", "version": "v1" }, { "created": "Tue, 18 Sep 2018 09:04:54 GMT", "version": "v2" }, { "created": "Fri, 21 Sep 2018 07:11:52 GMT", "version": "v3" }, { "created": "Mon, 12 Nov 2018 05:26:16 GMT", "version": "v4" } ]
2018-11-13
[ [ "Okubo", "Masahiro", "" ], [ "Hanaka", "Tesshu", "" ], [ "Ono", "Hirotaka", "" ] ]
We study the problem of finding a partition of a graph $G$ with maximum social welfare based on the social distance between vertices in $G$, called MaxSWP. This problem is known to be NP-hard in general. In this paper, we first give a complete characterization of optimal partitions of trees with small diameters. Then, by utilizing these results, we show that MaxSWP can be solved in linear time for trees. Moreover, we show that MaxSWP is NP-hard even for 4-regular graphs.
1307.6626
Zhixiong Chen
Zhixiong Chen, Zhihua Niu, Chenhuang Wu
On the $k$-error linear complexity of binary sequences derived from polynomial quotients
2 figures
null
10.1007/s11432-014-5220-7
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the $k$-error linear complexity of $p^2$-periodic binary sequences defined from the polynomial quotients (including the well-studied Fermat quotients), which are defined by $$ q_{p,w}(u)\equiv \frac{u^w-u^{wp}}{p} \bmod p ~\mathrm{with}~ 0 \le q_{p,w}(u) \le p-1, ~u\ge 0, $$ where $p$ is an odd prime and $1\le w<p$. Indeed, first for all integers $k$, we determine exact values of the $k$-error linear complexity over the finite field $\F_2$ for these binary sequences under the assumption that $2$ is a primitive root modulo $p^2$, and then we determine their $k$-error linear complexity over the finite field $\F_p$ for either $0\le k<p$ when $w=1$ or $0\le k<p-1$ when $2\le w<p$. Theoretical results obtained indicate that such sequences possess `good' error linear complexity.
[ { "created": "Thu, 25 Jul 2013 03:28:42 GMT", "version": "v1" } ]
2016-03-15
[ [ "Chen", "Zhixiong", "" ], [ "Niu", "Zhihua", "" ], [ "Wu", "Chenhuang", "" ] ]
We investigate the $k$-error linear complexity of $p^2$-periodic binary sequences defined from the polynomial quotients (including the well-studied Fermat quotients), which are defined by $$ q_{p,w}(u)\equiv \frac{u^w-u^{wp}}{p} \bmod p ~\mathrm{with}~ 0 \le q_{p,w}(u) \le p-1, ~u\ge 0, $$ where $p$ is an odd prime and $1\le w<p$. Indeed, first for all integers $k$, we determine exact values of the $k$-error linear complexity over the finite field $\F_2$ for these binary sequences under the assumption that $2$ is a primitive root modulo $p^2$, and then we determine their $k$-error linear complexity over the finite field $\F_p$ for either $0\le k<p$ when $w=1$ or $0\le k<p-1$ when $2\le w<p$. Theoretical results obtained indicate that such sequences possess `good' error linear complexity.
2406.04546
Sabri Mustafa Kahya
Sabri Mustafa Kahya, Boran Hamdi Sivrikaya, Muhammet Sami Yavuz, Eckehard Steinbach
FOOD: Facial Authentication and Out-of-Distribution Detection with Short-Range FMCW Radar
Accepted at ICIP 2024
null
null
null
cs.CV cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a short-range FMCW radar-based facial authentication and out-of-distribution (OOD) detection framework. Our pipeline jointly estimates the correct classes for the in-distribution (ID) samples and detects the OOD samples to prevent their inaccurate prediction. Our reconstruction-based architecture consists of a main convolutional block with one encoder and multi-decoder configuration, and intermediate linear encoder-decoder parts. Together, these elements form an accurate human face classifier and a robust OOD detector. For our dataset, gathered using a 60 GHz short-range FMCW radar, our network achieves an average classification accuracy of 98.07% in identifying in-distribution human faces. As an OOD detector, it achieves an average Area Under the Receiver Operating Characteristic (AUROC) curve of 98.50% and an average False Positive Rate at 95% True Positive Rate (FPR95) of 6.20%. Also, our extensive experiments show that the proposed approach outperforms previous OOD detectors in terms of common OOD detection metrics.
[ { "created": "Thu, 6 Jun 2024 23:08:03 GMT", "version": "v1" } ]
2024-06-10
[ [ "Kahya", "Sabri Mustafa", "" ], [ "Sivrikaya", "Boran Hamdi", "" ], [ "Yavuz", "Muhammet Sami", "" ], [ "Steinbach", "Eckehard", "" ] ]
This paper proposes a short-range FMCW radar-based facial authentication and out-of-distribution (OOD) detection framework. Our pipeline jointly estimates the correct classes for the in-distribution (ID) samples and detects the OOD samples to prevent their inaccurate prediction. Our reconstruction-based architecture consists of a main convolutional block with one encoder and multi-decoder configuration, and intermediate linear encoder-decoder parts. Together, these elements form an accurate human face classifier and a robust OOD detector. For our dataset, gathered using a 60 GHz short-range FMCW radar, our network achieves an average classification accuracy of 98.07% in identifying in-distribution human faces. As an OOD detector, it achieves an average Area Under the Receiver Operating Characteristic (AUROC) curve of 98.50% and an average False Positive Rate at 95% True Positive Rate (FPR95) of 6.20%. Also, our extensive experiments show that the proposed approach outperforms previous OOD detectors in terms of common OOD detection metrics.
2211.08029
Amirhossein Abaskohi
Amirhossein Abaskohi, Nazanin Sabri, Behnam Bahrak
Persian Emotion Detection using ParsBERT and Imbalanced Data Handling Approaches
14 pages, 5 figures, 9 tables
ACM Transactions on Asian and Low-Resource Language Information Processing 2022
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Emotion recognition is a machine learning application that can be performed using text, speech, or image data gathered from social media spaces. Detecting emotion can help us in different fields, including opinion mining. With the spread of social media, platforms like Twitter have become data sources, and the language used on these platforms is informal, making the emotion detection task difficult. EmoPars and ArmanEmo are two new human-labeled emotion datasets for the Persian language. These datasets, especially EmoPars, suffer from a severe imbalance in the number of samples per class. In this paper, we evaluate EmoPars and compare it with ArmanEmo. Throughout this analysis, we use data augmentation techniques, data re-sampling, and class-weights with Transformer-based Pretrained Language Models (PLMs) to handle the imbalance problem of these datasets. Moreover, feature selection is used to enhance the models' performance by emphasizing the text's specific features. In addition, we provide a new policy for selecting data from EmoPars, which selects the high-confidence samples; as a result, the model does not see samples that do not convey a specific emotion during training. Our model reaches a Macro-averaged F1-score of 0.81 and 0.76 on ArmanEmo and EmoPars, respectively, which are new state-of-the-art results on these benchmarks.
[ { "created": "Tue, 15 Nov 2022 10:22:49 GMT", "version": "v1" }, { "created": "Thu, 17 Nov 2022 12:13:11 GMT", "version": "v2" } ]
2022-11-21
[ [ "Abaskohi", "Amirhossein", "" ], [ "Sabri", "Nazanin", "" ], [ "Bahrak", "Behnam", "" ] ]
Emotion recognition is a machine learning application that can be performed using text, speech, or image data gathered from social media spaces. Detecting emotion can help us in different fields, including opinion mining. With the spread of social media, platforms like Twitter have become data sources, and the language used on these platforms is informal, making the emotion detection task difficult. EmoPars and ArmanEmo are two new human-labeled emotion datasets for the Persian language. These datasets, especially EmoPars, suffer from a severe imbalance in the number of samples per class. In this paper, we evaluate EmoPars and compare it with ArmanEmo. Throughout this analysis, we use data augmentation techniques, data re-sampling, and class-weights with Transformer-based Pretrained Language Models (PLMs) to handle the imbalance problem of these datasets. Moreover, feature selection is used to enhance the models' performance by emphasizing the text's specific features. In addition, we provide a new policy for selecting data from EmoPars, which selects the high-confidence samples; as a result, the model does not see samples that do not convey a specific emotion during training. Our model reaches a Macro-averaged F1-score of 0.81 and 0.76 on ArmanEmo and EmoPars, respectively, which are new state-of-the-art results on these benchmarks.
1907.08083
Azqa Nadeem
Azqa Nadeem, and Marianne Junger
Laptop Theft in a University Setting can be Avoided with Warnings
The results in this paper are erroneous. Due to selection bias, the results are not statistically significant
null
null
null
cs.CY cs.CR
http://creativecommons.org/licenses/by/4.0/
Laptops have become an indispensable asset in today's digital age. They often contain highly sensitive information, such as credentials and confidential documents. As a result, the value of a laptop is an accumulation of the value of both the physical device itself and the cyber assets it contains, making it a lucrative target for theft. Educational institutions have a large population of potential victims of laptop theft. To mitigate this risk, we investigate whether a simple warning sign can reduce the opportunity for potential offenders. To this end, we have conducted an empirical study to observe the prevalence of students/staff leaving their laptops unattended at a university study hall at the Delft University of Technology in the Netherlands, both with and without a warning sign. We observed 148 out of 220 subjects leaving their laptops unattended in just three weeks. The results also showed that without the warning banner, 75.5% (83 out of 110) of subjects left their laptops unattended and with the warning, only 59.1% (65 out of 110) of subjects showed the same behavior, which is a significant reduction of 16.4%. In addition, a qualitative analysis was performed on the responses of subjects who left their laptops unattended after the warning banner was placed. The results showed a mix of convenience and blind trust in the safety of the faculty. In conclusion, a simple banner was effective in reducing the opportunity for laptop theft. However, the percentage of laptops left unattended was still high even after the introduction of the banner.
[ { "created": "Thu, 18 Jul 2019 14:36:52 GMT", "version": "v1" }, { "created": "Tue, 6 Jul 2021 17:06:57 GMT", "version": "v2" }, { "created": "Fri, 4 Nov 2022 10:51:49 GMT", "version": "v3" } ]
2022-11-07
[ [ "Nadeem", "Azqa", "" ], [ "Junger", "Marianne", "" ] ]
Laptops have become an indispensable asset in today's digital age. They often contain highly sensitive information, such as credentials and confidential documents. As a result, the value of a laptop is an accumulation of the value of both the physical device itself and the cyber assets it contains, making it a lucrative target for theft. Educational institutions have a large population of potential victims of laptop theft. To mitigate this risk, we investigate whether a simple warning sign can reduce the opportunity for potential offenders. To this end, we have conducted an empirical study to observe the prevalence of students/staff leaving their laptops unattended at a university study hall at the Delft University of Technology in the Netherlands, both with and without a warning sign. We observed 148 out of 220 subjects leaving their laptops unattended in just three weeks. The results also showed that without the warning banner, 75.5% (83 out of 110) of subjects left their laptops unattended and with the warning, only 59.1% (65 out of 110) of subjects showed the same behavior, which is a significant reduction of 16.4%. In addition, a qualitative analysis was performed on the responses of subjects who left their laptops unattended after the warning banner was placed. The results showed a mix of convenience and blind trust in the safety of the faculty. In conclusion, a simple banner was effective in reducing the opportunity for laptop theft. However, the percentage of laptops left unattended was still high even after the introduction of the banner.
0909.4642
Marc Scherfenberg
Christian Knauer, Maarten L\"offler, Marc Scherfenberg, Thomas Wolle
The directed Hausdorff distance between imprecise point sets
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the directed Hausdorff distance between point sets in the plane, where one or both point sets consist of imprecise points. An imprecise point is modelled by a disc given by its centre and a radius. The actual position of an imprecise point may be anywhere within its disc. Due to the direction of the Hausdorff distance and whether its tight upper or lower bound is computed, there are several cases to consider. For every case we either show that the computation is NP-hard or we present an algorithm with a polynomial running time. Further, we give several approximation algorithms for the hard cases and show that one of them cannot be approximated within a factor better than 3, unless P=NP.
[ { "created": "Fri, 25 Sep 2009 15:25:25 GMT", "version": "v1" } ]
2009-09-30
[ [ "Knauer", "Christian", "" ], [ "Löffler", "Maarten", "" ], [ "Scherfenberg", "Marc", "" ], [ "Wolle", "Thomas", "" ] ]
We consider the directed Hausdorff distance between point sets in the plane, where one or both point sets consist of imprecise points. An imprecise point is modelled by a disc given by its centre and a radius. The actual position of an imprecise point may be anywhere within its disc. Due to the direction of the Hausdorff distance and whether its tight upper or lower bound is computed, there are several cases to consider. For every case we either show that the computation is NP-hard or we present an algorithm with a polynomial running time. Further, we give several approximation algorithms for the hard cases and show that one of them cannot be approximated within a factor better than 3, unless P=NP.
2104.06784
Yih-Chin Tai
Chi-Jyun Ko, Po-Chih Chen, Hock-Kiet Wong and Yih-Chin Tai
MoSES_2PDF: A GIS-Compatible GPU-accelerated High-Performance Simulation Tool for Grain-Fluid Shallow Flows
16 pages, 7 figures and 1 table
null
null
null
cs.CE physics.geo-ph
http://creativecommons.org/licenses/by/4.0/
We introduce a GPU-accelerated simulation tool, named Modeling on Shallow Flows with Efficient Simulation for Two-Phase Debris Flows (MoSES_2PDF), whose input and output data can be linked to the GIS system for engineering applications. MoSES_2PDF is developed on the CUDA structure, so it runs on different NVIDIA GPU cards once CUDA version 9.2 (or higher) is installed. The performance of MoSES_2PDF is evaluated, and it is found that the present GPU-CUDA implementation can enhance efficiency by up to 230-fold, depending on the PC/workstation, the model of GPU card, and the number of meshes in the computation domain. Two numerical examples are illustrated with two distinct initial inflow conditions, which are included as two modes of MoSES_2PDF, respectively. In the numerical example of a large-scale event, the 2009 Hsiaolin event, the results computed by two distinct NVIDIA GPU cards (RTX-2080-Ti and Tesla-V100) are found to be identical, and only a tiny deviation is observed in comparison with the results computed by the conventional single-core CPU code. This is speculated to be caused by the different structures in the source codes and some float/double operations. In addition to the illustration in the GIS system, the results computed by MoSES_2PDF can also be shown as animated 3D graphics in the ANSI-Platform, where the user can interact with 3D scenes. The feasibility, features, and facilities of MoSES_2PDF are demonstrated with respect to the two numerical examples concerning two real events.
[ { "created": "Wed, 14 Apr 2021 11:19:39 GMT", "version": "v1" } ]
2021-04-15
[ [ "Ko", "Chi-Jyun", "" ], [ "Chen", "Po-Chih", "" ], [ "Wong", "Hock-Kiet", "" ], [ "Tai", "Yih-Chin", "" ] ]
We introduce a GPU-accelerated simulation tool, named Modeling on Shallow Flows with Efficient Simulation for Two-Phase Debris Flows (MoSES_2PDF), whose input and output data can be linked to the GIS system for engineering applications. MoSES_2PDF is developed on the CUDA structure, so it runs on different NVIDIA GPU cards once CUDA version 9.2 (or higher) is installed. The performance of MoSES_2PDF is evaluated, and it is found that the present GPU-CUDA implementation can enhance efficiency by up to 230-fold, depending on the PC/workstation, the model of GPU card, and the number of meshes in the computation domain. Two numerical examples are illustrated with two distinct initial inflow conditions, which are included as two modes of MoSES_2PDF, respectively. In the numerical example of a large-scale event, the 2009 Hsiaolin event, the results computed by two distinct NVIDIA GPU cards (RTX-2080-Ti and Tesla-V100) are found to be identical, and only a tiny deviation is observed in comparison with the results computed by the conventional single-core CPU code. This is speculated to be caused by the different structures in the source codes and some float/double operations. In addition to the illustration in the GIS system, the results computed by MoSES_2PDF can also be shown as animated 3D graphics in the ANSI-Platform, where the user can interact with 3D scenes. The feasibility, features, and facilities of MoSES_2PDF are demonstrated with respect to the two numerical examples concerning two real events.
2310.16035
Jiayuan Mao
Joy Hsu, Jiayuan Mao, Joshua B. Tenenbaum, Jiajun Wu
What's Left? Concept Grounding with Logic-Enhanced Foundation Models
NeurIPS 2023. First two authors contributed equally. Project page: https://web.stanford.edu/~joycj/projects/left_neurips_2023
null
null
null
cs.CV cs.AI cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent works such as VisProg and ViperGPT have smartly composed foundation models for visual reasoning, using large language models (LLMs) to produce programs that can be executed by pre-trained vision-language models. However, they operate in limited domains, such as 2D images, and do not fully exploit the generalization of language: abstract concepts like "left" can also be grounded in 3D, temporal, and action data, as in moving to your left. This limited generalization stems from these inference-only methods' inability to learn or adapt pre-trained models to a new domain. We propose the Logic-Enhanced Foundation Model (LEFT), a unified framework that learns to ground and reason with concepts across domains with a differentiable, domain-independent, first-order logic-based program executor. LEFT has an LLM interpreter that outputs a program represented in a general, logic-based reasoning language, which is shared across all domains and tasks. LEFT's executor then executes the program with trainable domain-specific grounding modules. We show that LEFT flexibly learns concepts in four domains: 2D images, 3D scenes, human motions, and robotic manipulation. It exhibits strong reasoning ability in a wide variety of tasks, including those that are complex and not seen during training, and can be easily applied to new domains.
[ { "created": "Tue, 24 Oct 2023 17:50:20 GMT", "version": "v1" } ]
2023-10-25
[ [ "Hsu", "Joy", "" ], [ "Mao", "Jiayuan", "" ], [ "Tenenbaum", "Joshua B.", "" ], [ "Wu", "Jiajun", "" ] ]
Recent works such as VisProg and ViperGPT have smartly composed foundation models for visual reasoning, using large language models (LLMs) to produce programs that can be executed by pre-trained vision-language models. However, they operate in limited domains, such as 2D images, and do not fully exploit the generalization of language: abstract concepts like "left" can also be grounded in 3D, temporal, and action data, as in moving to your left. This limited generalization stems from these inference-only methods' inability to learn or adapt pre-trained models to a new domain. We propose the Logic-Enhanced Foundation Model (LEFT), a unified framework that learns to ground and reason with concepts across domains with a differentiable, domain-independent, first-order logic-based program executor. LEFT has an LLM interpreter that outputs a program represented in a general, logic-based reasoning language, which is shared across all domains and tasks. LEFT's executor then executes the program with trainable domain-specific grounding modules. We show that LEFT flexibly learns concepts in four domains: 2D images, 3D scenes, human motions, and robotic manipulation. It exhibits strong reasoning ability in a wide variety of tasks, including those that are complex and not seen during training, and can be easily applied to new domains.
1905.01011
Jakob Pfender
Jakob Pfender, Alvin Valera, Winston K. G. Seah
Content Delivery Latency of Caching Strategies for Information-Centric IoT
10 pages, 9 figures, journal paper
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-network caching is a central aspect of Information-Centric Networking (ICN). It enables the rapid distribution of content across the network, alleviating strain on content producers and reducing content delivery latencies. ICN has emerged as a promising candidate for use in the Internet of Things (IoT). However, IoT devices operate under severe constraints, most notably limited memory. This means that nodes cannot indiscriminately cache all content; instead, there is a need for a caching strategy that decides what content to cache. Furthermore, many applications in the IoT space are time-sensitive; therefore, finding a caching strategy that minimises the latency between content request and delivery is desirable. In this paper, we evaluate a number of ICN caching strategies with regard to latency and hop-count reduction using IoT devices in a physical testbed. We find that the topology of the network, and thus the routing algorithm used to generate forwarding information, has a significant impact on the performance of a given caching strategy. To the best of our knowledge, this is the first study that focuses on latency effects in ICN-IoT caching while using real IoT hardware, and the first to explicitly discuss the link between routing algorithm, network topology, and caching effects.
[ { "created": "Fri, 3 May 2019 02:50:08 GMT", "version": "v1" } ]
2019-05-06
[ [ "Pfender", "Jakob", "" ], [ "Valera", "Alvin", "" ], [ "Seah", "Winston K. G.", "" ] ]
In-network caching is a central aspect of Information-Centric Networking (ICN). It enables the rapid distribution of content across the network, alleviating strain on content producers and reducing content delivery latencies. ICN has emerged as a promising candidate for use in the Internet of Things (IoT). However, IoT devices operate under severe constraints, most notably limited memory. This means that nodes cannot indiscriminately cache all content; instead, there is a need for a caching strategy that decides what content to cache. Furthermore, many applications in the IoT space are time-sensitive; therefore, finding a caching strategy that minimises the latency between content request and delivery is desirable. In this paper, we evaluate a number of ICN caching strategies with regard to latency and hop-count reduction using IoT devices in a physical testbed. We find that the topology of the network, and thus the routing algorithm used to generate forwarding information, has a significant impact on the performance of a given caching strategy. To the best of our knowledge, this is the first study that focuses on latency effects in ICN-IoT caching while using real IoT hardware, and the first to explicitly discuss the link between routing algorithm, network topology, and caching effects.
1901.10610
Myung Seok Shim
Myung Seok Shim, Chenye Zhao, Yang Li, Xuchong Zhang, Wenrui Zhang, Peng Li
Robust Deep Multi-Modal Sensor Fusion using Fusion Weight Regularization and Target Learning
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sensor fusion has wide applications in many domains including health care and autonomous systems. While the advent of deep learning has enabled promising multi-modal fusion of high-level features and end-to-end sensor fusion solutions, existing deep learning based sensor fusion techniques including deep gating architectures are not always resilient, leading to the issue of fusion weight inconsistency. We propose deep multi-modal sensor fusion architectures with enhanced robustness, particularly in the presence of sensor failures. At the core of our gating architectures are fusion weight regularization and fusion target learning operating on auxiliary unimodal sensing networks appended to the main fusion model. The proposed regularized gating architectures outperform the existing deep learning architectures with and without gating under both clean and corrupted sensory inputs resulting from sensor failures. The demonstrated improvements are particularly pronounced when one or more sensory modalities are corrupted.
[ { "created": "Tue, 29 Jan 2019 23:32:20 GMT", "version": "v1" }, { "created": "Thu, 25 Jun 2020 05:06:28 GMT", "version": "v2" }, { "created": "Thu, 22 Apr 2021 02:38:26 GMT", "version": "v3" } ]
2021-04-23
[ [ "Shim", "Myung Seok", "" ], [ "Zhao", "Chenye", "" ], [ "Li", "Yang", "" ], [ "Zhang", "Xuchong", "" ], [ "Zhang", "Wenrui", "" ], [ "Li", "Peng", "" ] ]
Sensor fusion has wide applications in many domains including health care and autonomous systems. While the advent of deep learning has enabled promising multi-modal fusion of high-level features and end-to-end sensor fusion solutions, existing deep learning based sensor fusion techniques including deep gating architectures are not always resilient, leading to the issue of fusion weight inconsistency. We propose deep multi-modal sensor fusion architectures with enhanced robustness, particularly in the presence of sensor failures. At the core of our gating architectures are fusion weight regularization and fusion target learning operating on auxiliary unimodal sensing networks appended to the main fusion model. The proposed regularized gating architectures outperform the existing deep learning architectures with and without gating under both clean and corrupted sensory inputs resulting from sensor failures. The demonstrated improvements are particularly pronounced when one or more sensory modalities are corrupted.
2207.05409
Chenxin Li
Chenxin Li, Mingbao Lin, Zhiyuan Ding, Nie Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Liujuan Cao
Knowledge Condensation Distillation
ECCV2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher network to strengthen a smaller student. Existing methods focus on excavating the knowledge hints and transferring the whole knowledge to the student. However, knowledge redundancy arises, since the knowledge offers different value to the student at different learning stages. In this paper, we propose Knowledge Condensation Distillation (KCD). Specifically, the knowledge value on each sample is dynamically estimated, based on which an Expectation-Maximization (EM) framework is forged to iteratively condense a compact knowledge set from the teacher to guide the student learning. Our approach is easy to build on top of off-the-shelf KD methods, with no extra training parameters and negligible computation overhead. Thus, it presents a new perspective for KD, in which a student that actively identifies the teacher's knowledge in line with its aptitude can learn more effectively and efficiently. Experiments on standard benchmarks show that the proposed KCD boosts the performance of the student model with even higher distillation efficiency. Code is available at https://github.com/dzy3/KCD.
[ { "created": "Tue, 12 Jul 2022 09:17:34 GMT", "version": "v1" } ]
2022-07-13
[ [ "Li", "Chenxin", "" ], [ "Lin", "Mingbao", "" ], [ "Ding", "Zhiyuan", "" ], [ "Lin", "Nie", "" ], [ "Zhuang", "Yihong", "" ], [ "Huang", "Yue", "" ], [ "Ding", "Xinghao", "" ], [ "Cao", "Liujuan", "" ] ]
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher network to strengthen a smaller student. Existing methods focus on excavating the knowledge hints and transferring the whole knowledge to the student. However, knowledge redundancy arises, since the knowledge offers different value to the student at different learning stages. In this paper, we propose Knowledge Condensation Distillation (KCD). Specifically, the knowledge value on each sample is dynamically estimated, based on which an Expectation-Maximization (EM) framework is forged to iteratively condense a compact knowledge set from the teacher to guide the student learning. Our approach is easy to build on top of off-the-shelf KD methods, with no extra training parameters and negligible computation overhead. Thus, it presents a new perspective for KD, in which a student that actively identifies the teacher's knowledge in line with its aptitude can learn more effectively and efficiently. Experiments on standard benchmarks show that the proposed KCD boosts the performance of the student model with even higher distillation efficiency. Code is available at https://github.com/dzy3/KCD.
2406.13046
Cristian Meo
Cristian Meo, Ksenia Sycheva, Anirudh Goyal, Justin Dauwels
Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
It is common practice in natural language processing to pre-train a single model on a general domain and then fine-tune it for downstream tasks. However, when it comes to Large Language Models, fine-tuning the entire model can be computationally expensive, resulting in very intensive energy consumption. As a result, several Parameter Efficient Fine-Tuning (PEFT) approaches were recently proposed. One of the most popular approaches is low-rank adaptation (LoRA), where the key insight is decomposing the update weights of the pre-trained model into two low-rank matrices. However, the proposed approaches either use the same rank value across all different weight matrices, which has been shown to be a sub-optimal choice, or do not use any quantization technique, one of the most important factors when it comes to a model's energy consumption. In this work, we propose Bayesian-LoRA, which approaches low-rank adaptation and quantization from a Bayesian perspective by employing a prior distribution on both quantization levels and rank values. As a result, B-LoRA is able to fine-tune a pre-trained model on a specific downstream task, finding the optimal rank values and quantization levels for every low-rank matrix. We validate the proposed model by fine-tuning a pre-trained DeBERTaV3 on the GLUE benchmark. Moreover, we compare it to relevant baselines and present both qualitative and quantitative results, showing how the proposed approach is able to learn optimal-rank quantized matrices. B-LoRA performs on par with or better than the baselines while reducing the total number of bit operations by roughly 70% compared to the baseline methods.
[ { "created": "Tue, 18 Jun 2024 20:26:30 GMT", "version": "v1" }, { "created": "Tue, 9 Jul 2024 16:29:08 GMT", "version": "v2" } ]
2024-07-10
[ [ "Meo", "Cristian", "" ], [ "Sycheva", "Ksenia", "" ], [ "Goyal", "Anirudh", "" ], [ "Dauwels", "Justin", "" ] ]
It is common practice in natural language processing to pre-train a single model on a general domain and then fine-tune it for downstream tasks. However, when it comes to Large Language Models, fine-tuning the entire model can be computationally expensive, resulting in very intensive energy consumption. As a result, several Parameter Efficient Fine-Tuning (PEFT) approaches were recently proposed. One of the most popular approaches is low-rank adaptation (LoRA), where the key insight is decomposing the update weights of the pre-trained model into two low-rank matrices. However, the proposed approaches either use the same rank value across all different weight matrices, which has been shown to be a sub-optimal choice, or do not use any quantization technique, one of the most important factors when it comes to a model's energy consumption. In this work, we propose Bayesian-LoRA, which approaches low-rank adaptation and quantization from a Bayesian perspective by employing a prior distribution on both quantization levels and rank values. As a result, B-LoRA is able to fine-tune a pre-trained model on a specific downstream task, finding the optimal rank values and quantization levels for every low-rank matrix. We validate the proposed model by fine-tuning a pre-trained DeBERTaV3 on the GLUE benchmark. Moreover, we compare it to relevant baselines and present both qualitative and quantitative results, showing how the proposed approach is able to learn optimal-rank quantized matrices. B-LoRA performs on par with or better than the baselines while reducing the total number of bit operations by roughly 70% compared to the baseline methods.
2205.08301
Antonello Paolino
Tong Hui (1 and 2), Antonello Paolino (1 and 4), Gabriele Nava (1), Giuseppe L'Erario (1 and 3), Fabio Di Natale (1), Fabio Bergonti (1 and 3), Francesco Braghin (2) and Daniele Pucci (1 and 3) ((1) Istituto Italiano di Tecnologia, (2) Politecnico di Milano, (3) University of Manchester, (4) Universit\`a degli Studi di Napoli Federico II)
Centroidal Aerodynamic Modeling and Control of Flying Multibody Robots
7 pages, 6 figures, to be published in IEEE ICRA 2022. Presentation video: https://youtu.be/WDb-OVlh5XA
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a modeling and control framework for multibody flying robots subject to non-negligible aerodynamic forces acting on the centroidal dynamics. First, aerodynamic forces are calculated during robot flight in different operating conditions by means of Computational Fluid Dynamics (CFD) analysis. Then, analytical models of the aerodynamics coefficients are generated from the dataset collected with CFD analysis. The obtained simplified aerodynamic model is also used to improve the flying robot control design. We present two control strategies: compensating for the aerodynamic effects via feedback linearization and enforcing the controller robustness with gain-scheduling. Simulation results on the jet-powered humanoid robot iRonCub validate the proposed approach.
[ { "created": "Tue, 17 May 2022 12:58:18 GMT", "version": "v1" } ]
2022-05-18
[ [ "Hui", "Tong", "", "1 and 2" ], [ "Paolino", "Antonello", "", "1 and 4" ], [ "Nava", "Gabriele", "", "1 and 3" ], [ "L'Erario", "Giuseppe", "", "1 and 3" ], [ "Di Natale", "Fabio", "", "1 and 3" ], [ "Bergonti", "Fabio", "", "1 and 3" ], [ "Braghin", "Francesco", "", "1 and 3" ], [ "Pucci", "Daniele", "", "1 and 3" ] ]
This paper presents a modeling and control framework for multibody flying robots subject to non-negligible aerodynamic forces acting on the centroidal dynamics. First, aerodynamic forces are calculated during robot flight in different operating conditions by means of Computational Fluid Dynamics (CFD) analysis. Then, analytical models of the aerodynamics coefficients are generated from the dataset collected with CFD analysis. The obtained simplified aerodynamic model is also used to improve the flying robot control design. We present two control strategies: compensating for the aerodynamic effects via feedback linearization and enforcing the controller robustness with gain-scheduling. Simulation results on the jet-powered humanoid robot iRonCub validate the proposed approach.
2107.11174
Mohit Singhala
Mohit Singhala, Amy Chi, Maria Coleman and Jeremy D. Brown
Preliminary investigation into how limb choice affects kinesthetic perception
Accepted as Works-in-Progress paper to World Haptics 2019
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have a limited understanding of how we integrate haptic information in real time from our upper limbs to perform complex bimanual tasks, an ability that humans routinely employ to perform tasks of varying levels of difficulty. In order to understand how information from both limbs is used to create a unified percept, it is important to study each limb separately first. Prevalent theories highlighting the role of the central nervous system (CNS) in accounting for internal body dynamics seem to suggest that both upper limbs should be equally sensitive to external stimuli. However, there is empirical evidence demonstrating a perceptual difference between our upper limbs for tasks like shape discrimination, prompting the need to study the effects of limb choice on kinesthetic perception. In this manuscript, we begin by evaluating the Just Noticeable Difference (JND) for stiffness for each forearm separately. Early results validate the need for a more thorough investigation of the effect of limb choice on kinesthetic perception.
[ { "created": "Thu, 22 Jul 2021 16:56:43 GMT", "version": "v1" } ]
2021-07-26
[ [ "Singhala", "Mohit", "" ], [ "Chi", "Amy", "" ], [ "Coleman", "Maria", "" ], [ "Brown", "Jeremy D.", "" ] ]
We have a limited understanding of how we integrate haptic information in real time from our upper limbs to perform complex bimanual tasks, an ability that humans routinely employ to perform tasks of varying levels of difficulty. In order to understand how information from both limbs is used to create a unified percept, it is important to study each limb separately first. Prevalent theories highlighting the role of the central nervous system (CNS) in accounting for internal body dynamics seem to suggest that both upper limbs should be equally sensitive to external stimuli. However, there is empirical evidence demonstrating a perceptual difference between our upper limbs for tasks like shape discrimination, prompting the need to study the effects of limb choice on kinesthetic perception. In this manuscript, we begin by evaluating the Just Noticeable Difference (JND) for stiffness for each forearm separately. Early results validate the need for a more thorough investigation of the effect of limb choice on kinesthetic perception.
2312.11587
Diogo Luvizon
Diogo Luvizon and Vladislav Golyanik and Adam Kortylewski and Marc Habermann and Christian Theobalt
Relightable Neural Actor with Intrinsic Decomposition and Pose Control
Accepted to ECCV 2024. Project page: https://vcai.mpi-inf.mpg.de/projects/RNA/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creating a controllable and relightable digital avatar from multi-view video with fixed illumination is a very challenging problem since humans are highly articulated, creating pose-dependent appearance effects, and skin as well as clothing require space-varying BRDF modeling. Existing works on creating animatable avatars either do not focus on relighting at all, require controlled illumination setups, or try to recover a relightable avatar from very low-cost setups, i.e. a single RGB video, at the cost of severely limited result quality, e.g. shadows not even being modeled. To address this, we propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted, allows appearance editing, and models pose-dependent effects such as wrinkles and self-shadows. Importantly, for training, our method solely requires a multi-view recording of the human under a known, but static lighting condition. To tackle this challenging problem, we leverage an implicit geometry representation of the actor with a drivable density field that models pose-dependent deformations and derive a dynamic mapping between 3D and UV spaces, where normal, visibility, and materials are effectively encoded. To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors, providing the first benchmark of its kind for human relighting, and demonstrating state-of-the-art relighting results for novel human poses.
[ { "created": "Mon, 18 Dec 2023 14:30:13 GMT", "version": "v1" }, { "created": "Fri, 26 Jul 2024 13:16:28 GMT", "version": "v2" } ]
2024-07-29
[ [ "Luvizon", "Diogo", "" ], [ "Golyanik", "Vladislav", "" ], [ "Kortylewski", "Adam", "" ], [ "Habermann", "Marc", "" ], [ "Theobalt", "Christian", "" ] ]
Creating a controllable and relightable digital avatar from multi-view video with fixed illumination is a very challenging problem since humans are highly articulated, creating pose-dependent appearance effects, and skin as well as clothing require space-varying BRDF modeling. Existing works on creating animatable avatars either do not focus on relighting at all, require controlled illumination setups, or try to recover a relightable avatar from very low-cost setups, i.e. a single RGB video, at the cost of severely limited result quality, e.g. shadows not even being modeled. To address this, we propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted, allows appearance editing, and models pose-dependent effects such as wrinkles and self-shadows. Importantly, for training, our method solely requires a multi-view recording of the human under a known, but static lighting condition. To tackle this challenging problem, we leverage an implicit geometry representation of the actor with a drivable density field that models pose-dependent deformations and derive a dynamic mapping between 3D and UV spaces, where normal, visibility, and materials are effectively encoded. To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors, providing the first benchmark of its kind for human relighting, and demonstrating state-of-the-art relighting results for novel human poses.
2009.12401
Fergal Stapleton
Edgar Galv\'an and Fergal Stapleton
Semantic-based Distance Approaches in Multi-objective Genetic Programming
8 pages, 6 tables, added additional reference, updated citation format
null
null
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantics in the context of Genetic Programming (GP) can be understood as the behaviour of a program given a set of inputs, and has been well documented in improving the performance of GP for a range of diverse problems. There have been a wide variety of different methods which have incorporated semantics into single-objective GP. The study of semantics in Multi-objective (MO) GP, however, has been limited, and this paper aims at tackling this issue. More specifically, we conduct a comparison of three different forms of semantics in MOGP. One semantic-based method, (i) Semantic Similarity-based Crossover (SSC), is borrowed from single-objective GP, where the method has consistently been reported beneficial in evolutionary search. We also study two other methods, dubbed (ii) Semantic-based Distance as an additional criteriOn (SDO) and (iii) Pivot Similarity SDO. We empirically and consistently show how naturally handling semantic distance as an additional criterion to be optimised in MOGP leads to better performance when compared to canonical methods and SSC. Both semantic distance based approaches made use of a pivot, which is a reference point from the sparsest region of the search space, and it was found that individuals which were both semantically similar and dissimilar to this pivot were beneficial in promoting diversity. Moreover, we also show how the semantics successfully promoted in single-objective optimisation do not necessarily lead to a better performance when adopted in MOGP.
[ { "created": "Fri, 25 Sep 2020 19:01:13 GMT", "version": "v1" }, { "created": "Tue, 29 Sep 2020 10:24:35 GMT", "version": "v2" }, { "created": "Sun, 4 Oct 2020 11:33:30 GMT", "version": "v3" }, { "created": "Wed, 16 Dec 2020 20:31:59 GMT", "version": "v4" } ]
2020-12-18
[ [ "Galván", "Edgar", "" ], [ "Stapleton", "Fergal", "" ] ]
Semantics in the context of Genetic Programming (GP) can be understood as the behaviour of a program given a set of inputs, and has been well documented in improving the performance of GP for a range of diverse problems. There have been a wide variety of different methods which have incorporated semantics into single-objective GP. The study of semantics in Multi-objective (MO) GP, however, has been limited, and this paper aims at tackling this issue. More specifically, we conduct a comparison of three different forms of semantics in MOGP. One semantic-based method, (i) Semantic Similarity-based Crossover (SSC), is borrowed from single-objective GP, where the method has consistently been reported beneficial in evolutionary search. We also study two other methods, dubbed (ii) Semantic-based Distance as an additional criteriOn (SDO) and (iii) Pivot Similarity SDO. We empirically and consistently show how naturally handling semantic distance as an additional criterion to be optimised in MOGP leads to better performance when compared to canonical methods and SSC. Both semantic distance based approaches made use of a pivot, which is a reference point from the sparsest region of the search space, and it was found that individuals which were both semantically similar and dissimilar to this pivot were beneficial in promoting diversity. Moreover, we also show how the semantics successfully promoted in single-objective optimisation do not necessarily lead to a better performance when adopted in MOGP.
2201.05646
Biplav Srivastava
Biplav Srivastava, Tarmo Koppel, Sai Teja Paladi, Siva Likitha Valluru, Rohit Sharma, Owen Bond
ULTRA: A Data-driven Approach for Recommending Team Formation in Response to Proposal Calls
8 pages, Accepted to IEEE ICDM Workshop on AI for Nudging and Personalization (WAIN) 2022
null
null
null
cs.IR cs.AI cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
We introduce an emerging AI-based approach and prototype system for assisting team formation when researchers respond to calls for proposals from funding agencies. This is an instance of the general problem of building teams when demand opportunities arrive periodically and potential members may vary over time. The novelties of our approach are that we: (a) extract the technical skills of researchers and those required by calls from multiple data sources and normalize them using Natural Language Processing (NLP) techniques, (b) build a prototype solution based on matching and teaming under constraints, (c) describe initial feedback about the system from researchers at a university where it is to be deployed, and (d) create and publish a dataset that others can use.
[ { "created": "Thu, 13 Jan 2022 02:48:42 GMT", "version": "v1" }, { "created": "Mon, 28 Nov 2022 00:00:24 GMT", "version": "v2" } ]
2022-11-29
[ [ "Srivastava", "Biplav", "" ], [ "Koppel", "Tarmo", "" ], [ "Paladi", "Sai Teja", "" ], [ "Valluru", "Siva Likitha", "" ], [ "Sharma", "Rohit", "" ], [ "Bond", "Owen", "" ] ]
We introduce an emerging AI-based approach and prototype system for assisting team formation when researchers respond to calls for proposals from funding agencies. This is an instance of the general problem of building teams when demand opportunities arrive periodically and potential members may vary over time. The novelties of our approach are that we: (a) extract the technical skills of researchers and those required by calls from multiple data sources and normalize them using Natural Language Processing (NLP) techniques, (b) build a prototype solution based on matching and teaming under constraints, (c) describe initial feedback about the system from researchers at a university where it is to be deployed, and (d) create and publish a dataset that others can use.
1901.03097
Deepak Mishra
Deepak Mishra and Erik G. Larsson
Optimal Channel Estimation for Reciprocity-Based Backscattering with a Full-Duplex MIMO Reader
accepted for publication in IEEE Transactions on Signal Processing, 16 pages, 15 figures, 1 table
null
10.1109/TSP.2019.2893859
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Backscatter communication (BSC) technology can enable ubiquitous deployment of low-cost sustainable wireless devices. In this work we investigate the efficacy of a full-duplex multiple-input-multiple-output (MIMO) reader for enhancing the limited communication range of monostatic BSC systems. As this performance is strongly influenced by the channel estimation (CE) quality, we first derive a novel least-squares estimator for the forward and backward links between the reader and the tag, assuming that reciprocity holds and K orthogonal pilots are transmitted from the first K antennas of an N antenna reader. We also obtain the corresponding linear minimum-mean square-error estimate for the backscattered channel. After defining the transceiver design at the reader using these estimates, we jointly optimize the number of orthogonal pilots and energy allocation for the CE and information decoding phases to maximize the average backscattered signal-to-noise ratio (SNR) for efficiently decoding the tag's messages. The unimodality of this SNR in the optimization variables, along with a tight analytical approximation for the jointly global optimal design, is also discussed. Lastly, the selected numerical results validate the proposed analysis, present key insights into the optimal resource utilization at the reader, and quantify the achievable gains over the benchmark schemes.
[ { "created": "Thu, 10 Jan 2019 10:56:12 GMT", "version": "v1" } ]
2019-03-27
[ [ "Mishra", "Deepak", "" ], [ "Larsson", "Erik G.", "" ] ]
Backscatter communication (BSC) technology can enable ubiquitous deployment of low-cost sustainable wireless devices. In this work, we investigate the efficacy of a full-duplex multiple-input-multiple-output (MIMO) reader for enhancing the limited communication range of monostatic BSC systems. As this performance is strongly influenced by the channel estimation (CE) quality, we first derive a novel least-squares estimator for the forward and backward links between the reader and the tag, assuming that reciprocity holds and K orthogonal pilots are transmitted from the first K antennas of an N antenna reader. We also obtain the corresponding linear minimum-mean square-error estimate for the backscattered channel. After defining the transceiver design at the reader using these estimates, we jointly optimize the number of orthogonal pilots and energy allocation for the CE and information decoding phases to maximize the average backscattered signal-to-noise ratio (SNR) for efficiently decoding the tag's messages. The unimodality of this SNR in the optimization variables, along with a tight analytical approximation for the jointly global optimal design, is also discussed. Lastly, the selected numerical results validate the proposed analysis, present key insights into the optimal resource utilization at the reader, and quantify the achievable gains over the benchmark schemes.
2110.11402
Harald Steck
Harald Steck and Dario Garcia Garcia
On the Regularization of Autoencoders
10 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
While much work has been devoted to understanding the implicit (and explicit) regularization of deep nonlinear networks in the supervised setting, this paper focuses on unsupervised learning, i.e., autoencoders are trained with the objective of reproducing the output from the input. We extend recent results [Jin et al. 2021] on unconstrained linear models and apply them to (1) nonlinear autoencoders and (2) constrained linear autoencoders, obtaining the following two results: first, we show that the unsupervised setting by itself induces strong additional regularization, i.e., a severe reduction in the model-capacity of the learned autoencoder: we derive that a deep nonlinear autoencoder cannot fit the training data more accurately than a linear autoencoder does if both models have the same dimensionality in their last hidden layer (and under a few additional assumptions). Our second contribution is concerned with the low-rank EDLAE model [Steck 2020], which is a linear autoencoder with a constraint on the diagonal of the learned low-rank parameter-matrix for improved generalization: we derive a closed-form approximation to the optimum of its non-convex training-objective, and empirically demonstrate that it is an accurate approximation across all model-ranks in our experiments on three well-known data sets.
[ { "created": "Thu, 21 Oct 2021 18:28:25 GMT", "version": "v1" } ]
2021-10-25
[ [ "Steck", "Harald", "" ], [ "Garcia", "Dario Garcia", "" ] ]
While much work has been devoted to understanding the implicit (and explicit) regularization of deep nonlinear networks in the supervised setting, this paper focuses on unsupervised learning, i.e., autoencoders are trained with the objective of reproducing the output from the input. We extend recent results [Jin et al. 2021] on unconstrained linear models and apply them to (1) nonlinear autoencoders and (2) constrained linear autoencoders, obtaining the following two results: first, we show that the unsupervised setting by itself induces strong additional regularization, i.e., a severe reduction in the model-capacity of the learned autoencoder: we derive that a deep nonlinear autoencoder cannot fit the training data more accurately than a linear autoencoder does if both models have the same dimensionality in their last hidden layer (and under a few additional assumptions). Our second contribution is concerned with the low-rank EDLAE model [Steck 2020], which is a linear autoencoder with a constraint on the diagonal of the learned low-rank parameter-matrix for improved generalization: we derive a closed-form approximation to the optimum of its non-convex training-objective, and empirically demonstrate that it is an accurate approximation across all model-ranks in our experiments on three well-known data sets.
2005.09117
Avner May
Simran Arora, Avner May, Jian Zhang, Christopher R\'e
Contextual Embeddings: When Are They Worth It?
ACL 2020
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe), and an even simpler baseline---random word embeddings---focusing on the impact of the training set size and the linguistic properties of the task. Surprisingly, we find that both of these simpler baselines can match contextual embeddings on industry-scale data, and often perform within 5 to 10% accuracy (absolute) on benchmark tasks. Furthermore, we identify properties of data for which contextual embeddings give particularly large gains: language containing complex structure, ambiguous word usage, and words unseen in training.
[ { "created": "Mon, 18 May 2020 22:20:17 GMT", "version": "v1" } ]
2020-05-20
[ [ "Arora", "Simran", "" ], [ "May", "Avner", "" ], [ "Zhang", "Jian", "" ], [ "Ré", "Christopher", "" ] ]
We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe), and an even simpler baseline---random word embeddings---focusing on the impact of the training set size and the linguistic properties of the task. Surprisingly, we find that both of these simpler baselines can match contextual embeddings on industry-scale data, and often perform within 5 to 10% accuracy (absolute) on benchmark tasks. Furthermore, we identify properties of data for which contextual embeddings give particularly large gains: language containing complex structure, ambiguous word usage, and words unseen in training.
2405.16656
Hellina Hailu Nigatu
Hellina Hailu Nigatu and Inioluwa Deborah Raji
"I Searched for a Religious Song in Amharic and Got Sexual Content Instead": Investigating Online Harm in Low-Resourced Languages on YouTube
To appear in ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) 2024
null
10.1145/3630106.3658546
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Online social media platforms such as YouTube have a wide, global reach. However, little is known about the experience of low-resourced language speakers on such platforms; especially in how they experience and navigate harmful content. To better understand this, we (1) conducted semi-structured interviews (n=15) and (2) analyzed search results (n=9313), recommendations (n=3336), channels (n=120) and comments (n=406) of policy-violating sexual content on YouTube focusing on the Amharic language. Our findings reveal that -- although Amharic-speaking YouTube users find the platform crucial for several aspects of their lives -- participants reported unplanned exposure to policy-violating sexual content when searching for benign, popular queries. Furthermore, malicious content creators seem to exploit under-performing language technologies and content moderation to further target vulnerable groups of speakers, including migrant domestic workers, diaspora, and local Ethiopians. Overall, our study sheds light on how failures in low-resourced language technology may lead to exposure to harmful content and suggests implications for stakeholders in minimizing harm. Content Warning: This paper includes discussions of NSFW topics and harmful content (hate, abuse, sexual harassment, self-harm, misinformation). The authors do not support the creation or distribution of harmful content.
[ { "created": "Sun, 26 May 2024 18:18:11 GMT", "version": "v1" } ]
2024-05-28
[ [ "Nigatu", "Hellina Hailu", "" ], [ "Raji", "Inioluwa Deborah", "" ] ]
Online social media platforms such as YouTube have a wide, global reach. However, little is known about the experience of low-resourced language speakers on such platforms; especially in how they experience and navigate harmful content. To better understand this, we (1) conducted semi-structured interviews (n=15) and (2) analyzed search results (n=9313), recommendations (n=3336), channels (n=120) and comments (n=406) of policy-violating sexual content on YouTube focusing on the Amharic language. Our findings reveal that -- although Amharic-speaking YouTube users find the platform crucial for several aspects of their lives -- participants reported unplanned exposure to policy-violating sexual content when searching for benign, popular queries. Furthermore, malicious content creators seem to exploit under-performing language technologies and content moderation to further target vulnerable groups of speakers, including migrant domestic workers, diaspora, and local Ethiopians. Overall, our study sheds light on how failures in low-resourced language technology may lead to exposure to harmful content and suggests implications for stakeholders in minimizing harm. Content Warning: This paper includes discussions of NSFW topics and harmful content (hate, abuse, sexual harassment, self-harm, misinformation). The authors do not support the creation or distribution of harmful content.
2406.05881
Utsav Singh
Utsav Singh, Pramit Bhattacharyya, Vinay P. Namboodiri
LGR2: Language Guided Reward Relabeling for Accelerating Hierarchical Reinforcement Learning
null
null
null
null
cs.LG cs.CL cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Developing interactive systems that leverage natural language instructions to solve complex robotic control tasks has been a long-desired goal in the robotics community. Large Language Models (LLMs) have demonstrated exceptional abilities in handling complex tasks, including logical reasoning, in-context learning, and code generation. However, predicting low-level robotic actions using LLMs poses significant challenges. Additionally, the complexity of such tasks usually demands the acquisition of policies to execute diverse subtasks and combine them to attain the ultimate objective. Hierarchical Reinforcement Learning (HRL) is an elegant approach for solving such tasks, which provides the intuitive benefits of temporal abstraction and improved exploration. However, HRL faces the recurring issue of non-stationarity due to unstable lower primitive behaviour. In this work, we propose LGR2, a novel HRL framework that leverages language instructions to generate a stationary reward function for the higher-level policy. Since the language-guided reward is unaffected by the lower primitive behaviour, LGR2 mitigates non-stationarity and is thus an elegant method for leveraging language instructions to solve robotic control tasks. To analyze the efficacy of our approach, we perform empirical analysis and demonstrate that LGR2 effectively alleviates non-stationarity in HRL. Our approach attains success rates exceeding 70$\%$ in challenging, sparse-reward robotic navigation and manipulation environments where the baselines fail to achieve any significant progress. Additionally, we conduct real-world robotic manipulation experiments and demonstrate that LGR2 shows impressive generalization in real-world scenarios.
[ { "created": "Sun, 9 Jun 2024 18:40:24 GMT", "version": "v1" }, { "created": "Sun, 16 Jun 2024 10:28:45 GMT", "version": "v2" } ]
2024-06-18
[ [ "Singh", "Utsav", "" ], [ "Bhattacharyya", "Pramit", "" ], [ "Namboodiri", "Vinay P.", "" ] ]
Developing interactive systems that leverage natural language instructions to solve complex robotic control tasks has been a long-desired goal in the robotics community. Large Language Models (LLMs) have demonstrated exceptional abilities in handling complex tasks, including logical reasoning, in-context learning, and code generation. However, predicting low-level robotic actions using LLMs poses significant challenges. Additionally, the complexity of such tasks usually demands the acquisition of policies to execute diverse subtasks and combine them to attain the ultimate objective. Hierarchical Reinforcement Learning (HRL) is an elegant approach for solving such tasks, which provides the intuitive benefits of temporal abstraction and improved exploration. However, HRL faces the recurring issue of non-stationarity due to unstable lower primitive behaviour. In this work, we propose LGR2, a novel HRL framework that leverages language instructions to generate a stationary reward function for the higher-level policy. Since the language-guided reward is unaffected by the lower primitive behaviour, LGR2 mitigates non-stationarity and is thus an elegant method for leveraging language instructions to solve robotic control tasks. To analyze the efficacy of our approach, we perform empirical analysis and demonstrate that LGR2 effectively alleviates non-stationarity in HRL. Our approach attains success rates exceeding 70$\%$ in challenging, sparse-reward robotic navigation and manipulation environments where the baselines fail to achieve any significant progress. Additionally, we conduct real-world robotic manipulation experiments and demonstrate that LGR2 shows impressive generalization in real-world scenarios.
1609.04493
Jia Pan
Yajue Yang and Yuanqing Wu and Jia Pan
Parallel Dynamics Computation using Prefix Sum Operations
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new parallel framework for fast computation of inverse and forward dynamics of articulated robots based on prefix sums (scans). We re-investigate the well-known recursive Newton-Euler formulation of robot dynamics and show that the forward-backward propagation process for robot inverse dynamics is equivalent to two scan operations on certain semigroups. We show that the state-of-the-art forward dynamics algorithms may almost completely be cast into a sequence of scan operations, with unscannable parts clearly identified. This suggests a serial-parallel hybrid approach for systems with a moderate number of links. We implement our scan-based algorithms on the Nvidia CUDA platform and compare their performance with multithreaded CPU-based recursive algorithms; a significant level of acceleration is demonstrated.
[ { "created": "Thu, 15 Sep 2016 02:11:16 GMT", "version": "v1" } ]
2016-09-16
[ [ "Yang", "Yajue", "" ], [ "Wu", "Yuanqing", "" ], [ "Pan", "Jia", "" ] ]
We propose a new parallel framework for fast computation of inverse and forward dynamics of articulated robots based on prefix sums (scans). We re-investigate the well-known recursive Newton-Euler formulation of robot dynamics and show that the forward-backward propagation process for robot inverse dynamics is equivalent to two scan operations on certain semigroups. We show that the state-of-the-art forward dynamics algorithms may almost completely be cast into a sequence of scan operations, with unscannable parts clearly identified. This suggests a serial-parallel hybrid approach for systems with a moderate number of links. We implement our scan-based algorithms on the Nvidia CUDA platform and compare their performance with multithreaded CPU-based recursive algorithms; a significant level of acceleration is demonstrated.
1201.3458
Jeffrey Yu
Di Wu, Yiping Ke, Jeffrey Xu Yu, Zheng Liu
Detecting Priming News Events
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of detecting priming events based on a time series index and an evolving document stream. We define a priming event as an event which triggers abnormal movements of the time series index, e.g., the Iraq War with respect to the approval index of President Bush. Existing solutions either focus on organizing coherent keywords from a document stream into events or on identifying correlated movements between keyword frequency trajectories and the time series index. In this paper, we tackle the problem in two major steps. (1) We identify the elements that form a priming event. Each identified element, called an influential topic, consists of a set of coherent keywords, and we extract them by looking at the correlation between keyword trajectories and the time series index of interest at a global level. (2) We extract priming events by detecting and organizing the bursty influential topics at a micro level. We evaluate our algorithms on a real-world dataset and the results confirm that our method is able to discover priming events effectively.
[ { "created": "Tue, 17 Jan 2012 08:59:57 GMT", "version": "v1" } ]
2012-01-18
[ [ "Wu", "Di", "" ], [ "Ke", "Yiping", "" ], [ "Yu", "Jeffrey Xu", "" ], [ "Liu", "Zheng", "" ] ]
We study the problem of detecting priming events based on a time series index and an evolving document stream. We define a priming event as an event which triggers abnormal movements of the time series index, e.g., the Iraq War with respect to the approval index of President Bush. Existing solutions either focus on organizing coherent keywords from a document stream into events or on identifying correlated movements between keyword frequency trajectories and the time series index. In this paper, we tackle the problem in two major steps. (1) We identify the elements that form a priming event. Each identified element, called an influential topic, consists of a set of coherent keywords, and we extract them by looking at the correlation between keyword trajectories and the time series index of interest at a global level. (2) We extract priming events by detecting and organizing the bursty influential topics at a micro level. We evaluate our algorithms on a real-world dataset and the results confirm that our method is able to discover priming events effectively.
2310.06744
Wangbo Yu
Wangbo Yu, Li Yuan, Yan-Pei Cao, Xiangjun Gao, Xiaoyu Li, Wenbo Hu, Long Quan, Ying Shan, Yonghong Tian
HiFi-123: Towards High-fidelity One Image to 3D Content Generation
Accepted by ECCV 2024. Project Page: https://drexubery.github.io/HiFi-123/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in diffusion models have enabled 3D generation from a single image. However, current methods often produce suboptimal results for novel views, with blurred textures and deviations from the reference image, limiting their practical applications. In this paper, we introduce HiFi-123, a method designed for high-fidelity and multi-view consistent 3D generation. Our contributions are twofold: First, we propose a Reference-Guided Novel View Enhancement (RGNV) technique that significantly improves the fidelity of diffusion-based zero-shot novel view synthesis methods. Second, capitalizing on the RGNV, we present a novel Reference-Guided State Distillation (RGSD) loss. When incorporated into the optimization-based image-to-3D pipeline, our method significantly improves 3D generation quality, achieving state-of-the-art performance. Comprehensive evaluations demonstrate the effectiveness of our approach over existing methods, both qualitatively and quantitatively. Video results are available on the project page.
[ { "created": "Tue, 10 Oct 2023 16:14:20 GMT", "version": "v1" }, { "created": "Mon, 25 Mar 2024 11:35:55 GMT", "version": "v2" }, { "created": "Fri, 12 Jul 2024 01:55:26 GMT", "version": "v3" } ]
2024-07-15
[ [ "Yu", "Wangbo", "" ], [ "Yuan", "Li", "" ], [ "Cao", "Yan-Pei", "" ], [ "Gao", "Xiangjun", "" ], [ "Li", "Xiaoyu", "" ], [ "Hu", "Wenbo", "" ], [ "Quan", "Long", "" ], [ "Shan", "Ying", "" ], [ "Tian", "Yonghong", "" ] ]
Recent advances in diffusion models have enabled 3D generation from a single image. However, current methods often produce suboptimal results for novel views, with blurred textures and deviations from the reference image, limiting their practical applications. In this paper, we introduce HiFi-123, a method designed for high-fidelity and multi-view consistent 3D generation. Our contributions are twofold: First, we propose a Reference-Guided Novel View Enhancement (RGNV) technique that significantly improves the fidelity of diffusion-based zero-shot novel view synthesis methods. Second, capitalizing on the RGNV, we present a novel Reference-Guided State Distillation (RGSD) loss. When incorporated into the optimization-based image-to-3D pipeline, our method significantly improves 3D generation quality, achieving state-of-the-art performance. Comprehensive evaluations demonstrate the effectiveness of our approach over existing methods, both qualitatively and quantitatively. Video results are available on the project page.
1909.06623
Wen Yan
Wen Yan and Eduardo Corona and Dhairya Malhotra and Shravan Veerapaneni and Michael Shelley
A scalable computational platform for particulate Stokes suspensions
null
null
10.1016/j.jcp.2020.109524
null
cs.CE cond-mat.soft physics.comp-ph physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a computational framework for simulating suspensions of rigid particles in Newtonian Stokes flow. One central building block is a collision-resolution algorithm that overcomes the numerical constraints arising from particle collisions. This algorithm extends the well-known complementarity method for non-smooth multi-body dynamics to resolve collisions in dense rigid body suspensions. This approach formulates the collision resolution problem as a linear complementarity problem with geometric `non-overlapping' constraints imposed at each timestep. It is then reformulated as a constrained quadratic programming problem and the Barzilai-Borwein projected gradient descent method is applied for its solution. This framework is designed to be applicable for any convex particle shape, e.g., spheres and spherocylinders, and applicable to any Stokes mobility solver, including the Rotne-Prager-Yamakawa approximation, Stokesian Dynamics, and PDE solvers (e.g., boundary integral and immersed boundary methods). In particular, this method imposes Newton's Third Law and records the entire contact network. Further, we describe a fast, parallel, and spectrally-accurate boundary integral method tailored for spherical particles, capable of resolving lubrication effects. We show weak and strong parallel scalings up to $8\times 10^4$ particles with approximately $4\times 10^7$ degrees of freedom on $1792$ cores. We demonstrate the versatility of this framework with several examples, including sedimentation of particle clusters, and active matter systems composed of ensembles of particles driven to rotate.
[ { "created": "Sat, 14 Sep 2019 16:18:13 GMT", "version": "v1" }, { "created": "Fri, 15 May 2020 17:19:44 GMT", "version": "v2" } ]
2020-06-24
[ [ "Yan", "Wen", "" ], [ "Corona", "Eduardo", "" ], [ "Malhotra", "Dhairya", "" ], [ "Veerapaneni", "Shravan", "" ], [ "Shelley", "Michael", "" ] ]
We describe a computational framework for simulating suspensions of rigid particles in Newtonian Stokes flow. One central building block is a collision-resolution algorithm that overcomes the numerical constraints arising from particle collisions. This algorithm extends the well-known complementarity method for non-smooth multi-body dynamics to resolve collisions in dense rigid body suspensions. This approach formulates the collision resolution problem as a linear complementarity problem with geometric `non-overlapping' constraints imposed at each timestep. It is then reformulated as a constrained quadratic programming problem and the Barzilai-Borwein projected gradient descent method is applied for its solution. This framework is designed to be applicable for any convex particle shape, e.g., spheres and spherocylinders, and applicable to any Stokes mobility solver, including the Rotne-Prager-Yamakawa approximation, Stokesian Dynamics, and PDE solvers (e.g., boundary integral and immersed boundary methods). In particular, this method imposes Newton's Third Law and records the entire contact network. Further, we describe a fast, parallel, and spectrally-accurate boundary integral method tailored for spherical particles, capable of resolving lubrication effects. We show weak and strong parallel scalings up to $8\times 10^4$ particles with approximately $4\times 10^7$ degrees of freedom on $1792$ cores. We demonstrate the versatility of this framework with several examples, including sedimentation of particle clusters, and active matter systems composed of ensembles of particles driven to rotate.
2210.06812
Jonas Mueller
Hui Wen Goh, Ulyana Tkachenko, Jonas Mueller
CROWDLAB: Supervised learning to infer consensus labels and quality scores for data with multiple annotators
null
NeurIPS 2022 Human in the Loop Learning Workshop
null
null
cs.LG cs.HC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-world data for classification is often labeled by multiple annotators. For analyzing such data, we introduce CROWDLAB, a straightforward approach to utilize any trained classifier to estimate: (1) A consensus label for each example that aggregates the available annotations; (2) A confidence score for how likely each consensus label is correct; (3) A rating for each annotator quantifying the overall correctness of their labels. Existing algorithms to estimate related quantities in crowdsourcing often rely on sophisticated generative models with iterative inference. CROWDLAB instead uses a straightforward weighted ensemble. Existing algorithms often rely solely on annotator statistics, ignoring the features of the examples from which the annotations derive. CROWDLAB utilizes any classifier model trained on these features, and can thus better generalize between examples with similar features. On real-world multi-annotator image data, our proposed method provides superior estimates for (1)-(3) than existing algorithms like Dawid-Skene/GLAD.
[ { "created": "Thu, 13 Oct 2022 07:54:07 GMT", "version": "v1" }, { "created": "Fri, 27 Jan 2023 18:53:11 GMT", "version": "v2" } ]
2023-01-30
[ [ "Goh", "Hui Wen", "" ], [ "Tkachenko", "Ulyana", "" ], [ "Mueller", "Jonas", "" ] ]
Real-world data for classification is often labeled by multiple annotators. For analyzing such data, we introduce CROWDLAB, a straightforward approach to utilize any trained classifier to estimate: (1) A consensus label for each example that aggregates the available annotations; (2) A confidence score for how likely each consensus label is correct; (3) A rating for each annotator quantifying the overall correctness of their labels. Existing algorithms to estimate related quantities in crowdsourcing often rely on sophisticated generative models with iterative inference. CROWDLAB instead uses a straightforward weighted ensemble. Existing algorithms often rely solely on annotator statistics, ignoring the features of the examples from which the annotations derive. CROWDLAB utilizes any classifier model trained on these features, and can thus better generalize between examples with similar features. On real-world multi-annotator image data, our proposed method provides superior estimates for (1)-(3) than existing algorithms like Dawid-Skene/GLAD.
2309.12442
DongHoon Kim
DongHoon Kim, Preston Bruner, Isaac Cho
Folding Rays: a Bimanual Occluded Target Interaction Technique
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
As Virtual Reality becomes commonplace in the world, it is important for developers to focus on user interaction with the virtual world. Currently, there are limitations to some selection and navigation techniques that have not yet been completely overcome. Focusing specifically on enhancing ray-casting, we present the advanced technique of folding rays which allows for the selection of occluded targets without any unnecessary physical navigation around a virtual environment. By improving upon current approaches, our technique allows for the selection of these targets without any manipulation of the virtual environment itself using rays that can bend at user-determined points. With their potential to be used in conjunction with teleportation as a virtual navigation technique, folding rays can be used in a variety of scenarios to enhance a user's interactive experience in virtual environments.
[ { "created": "Thu, 21 Sep 2023 19:23:55 GMT", "version": "v1" } ]
2023-09-25
[ [ "Kim", "DongHoon", "" ], [ "Bruner", "Preston", "" ], [ "Cho", "Isaac", "" ] ]
As Virtual Reality becomes commonplace in the world, it is important for developers to focus on user interaction with the virtual world. Currently, there are limitations to some selection and navigation techniques that have not yet been completely overcome. Focusing specifically on enhancing ray-casting, we present the advanced technique of folding rays which allows for the selection of occluded targets without any unnecessary physical navigation around a virtual environment. By improving upon current approaches, our technique allows for the selection of these targets without any manipulation of the virtual environment itself using rays that can bend at user-determined points. With their potential to be used in conjunction with teleportation as a virtual navigation technique, folding rays can be used in a variety of scenarios to enhance a user's interactive experience in virtual environments.
2104.07654
Nicha Dvornek
Nicha C. Dvornek, Xiaoxiao Li, Juntang Zhuang, Pamela Ventola, and James S. Duncan
Demographic-Guided Attention in Recurrent Neural Networks for Modeling Neuropathophysiological Heterogeneity
MLMI 2020 (MICCAI Workshop)
null
10.1007/978-3-030-59861-7_37
null
cs.LG cs.CV eess.IV q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterogeneous presentation of a neurological disorder suggests potential differences in the underlying pathophysiological changes that occur in the brain. We propose to model heterogeneous patterns of functional network differences using a demographic-guided attention (DGA) mechanism for recurrent neural network models for prediction from functional magnetic resonance imaging (fMRI) time-series data. The context computed from the DGA head is used to help focus on the appropriate functional networks based on individual demographic information. We demonstrate improved classification on 3 subsets of the ABIDE I dataset used in published studies that have previously produced state-of-the-art results, evaluating performance under a leave-one-site-out cross-validation framework for better generalizability to new data. Finally, we provide examples of interpreting functional network differences based on individual demographic variables.
[ { "created": "Thu, 15 Apr 2021 17:58:36 GMT", "version": "v1" } ]
2021-04-16
[ [ "Dvornek", "Nicha C.", "" ], [ "Li", "Xiaoxiao", "" ], [ "Zhuang", "Juntang", "" ], [ "Ventola", "Pamela", "" ], [ "Duncan", "James S.", "" ] ]
Heterogeneous presentation of a neurological disorder suggests potential differences in the underlying pathophysiological changes that occur in the brain. We propose to model heterogeneous patterns of functional network differences using a demographic-guided attention (DGA) mechanism for recurrent neural network models for prediction from functional magnetic resonance imaging (fMRI) time-series data. The context computed from the DGA head is used to help focus on the appropriate functional networks based on individual demographic information. We demonstrate improved classification on 3 subsets of the ABIDE I dataset used in published studies that have previously produced state-of-the-art results, evaluating performance under a leave-one-site-out cross-validation framework for better generalizability to new data. Finally, we provide examples of interpreting functional network differences based on individual demographic variables.
2405.02004
Yingshuang Zou
Yingshuang Zou, Yikang Ding, Xi Qiu, Haoqian Wang, Haotian Zhang
M${^2}$Depth: Self-supervised Two-Frame Multi-camera Metric Depth Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel self-supervised two-frame multi-camera metric depth estimation network, termed M${^2}$Depth, which is designed to predict reliable scale-aware surrounding depth in autonomous driving. Unlike the previous works that use multi-view images from a single time-step or multiple time-step images from a single camera, M${^2}$Depth takes temporally adjacent two-frame images from multiple cameras as inputs and produces high-quality surrounding depth. We first construct cost volumes in the spatial and temporal domains individually and propose a spatial-temporal fusion module that integrates the spatial-temporal information to yield a strong volume representation. We additionally combine the neural prior from SAM features with internal features to reduce the ambiguity between foreground and background and strengthen the depth edges. Extensive experimental results on the nuScenes and DDAD benchmarks show M${^2}$Depth achieves state-of-the-art performance. More results can be found at https://heiheishuang.xyz/M2Depth .
[ { "created": "Fri, 3 May 2024 11:06:37 GMT", "version": "v1" } ]
2024-05-06
[ [ "Zou", "Yingshuang", "" ], [ "Ding", "Yikang", "" ], [ "Qiu", "Xi", "" ], [ "Wang", "Haoqian", "" ], [ "Zhang", "Haotian", "" ] ]
This paper presents a novel self-supervised two-frame multi-camera metric depth estimation network, termed M${^2}$Depth, which is designed to predict reliable scale-aware surrounding depth in autonomous driving. Unlike the previous works that use multi-view images from a single time-step or multiple time-step images from a single camera, M${^2}$Depth takes temporally adjacent two-frame images from multiple cameras as inputs and produces high-quality surrounding depth. We first construct cost volumes in the spatial and temporal domains individually and propose a spatial-temporal fusion module that integrates the spatial-temporal information to yield a strong volume representation. We additionally combine the neural prior from SAM features with internal features to reduce the ambiguity between foreground and background and strengthen the depth edges. Extensive experimental results on the nuScenes and DDAD benchmarks show M${^2}$Depth achieves state-of-the-art performance. More results can be found at https://heiheishuang.xyz/M2Depth .
2110.06361
Shihao Ju
Shihao Ju and Theodore S. Rappaport
Sub-Terahertz Spatial Statistical MIMO Channel Model for Urban Microcells at 142 GHz
6 pages, 7 figures, 2021 IEEE Global Communications Conference
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Sixth generation (6G) cellular systems are expected to extend the operational range to sub-Terahertz (THz) frequencies between 100 and 300 GHz due to the broad unexploited spectrum therein. A proper channel model is needed to accurately describe spatial and temporal channel characteristics and faithfully create channel impulse responses at sub-THz frequencies. This paper studies the channel spatial statistics such as the number of spatial clusters and cluster power distribution based on recent radio propagation measurements conducted at 142 GHz in an urban microcell (UMi) scenario. For the 28 measured locations, we observe one to four spatial clusters at most locations. A detailed spatial statistical multiple input multiple output (MIMO) channel generation procedure is introduced based on the derived empirical channel statistics. We find that beamforming provides better spectral efficiency than spatial multiplexing in the LOS scenario due to the boresight path, and two spatial streams usually offer the highest spectral efficiency at most NLOS locations due to the limited number of spatial clusters.
[ { "created": "Tue, 12 Oct 2021 21:10:15 GMT", "version": "v1" } ]
2021-10-14
[ [ "Ju", "Shihao", "" ], [ "Rappaport", "Theodore S.", "" ] ]
Sixth generation (6G) cellular systems are expected to extend the operational range to sub-Terahertz (THz) frequencies between 100 and 300 GHz due to the broad unexploited spectrum therein. A proper channel model is needed to accurately describe spatial and temporal channel characteristics and faithfully create channel impulse responses at sub-THz frequencies. This paper studies the channel spatial statistics such as the number of spatial clusters and cluster power distribution based on recent radio propagation measurements conducted at 142 GHz in an urban microcell (UMi) scenario. For the 28 measured locations, we observe one to four spatial clusters at most locations. A detailed spatial statistical multiple input multiple output (MIMO) channel generation procedure is introduced based on the derived empirical channel statistics. We find that beamforming provides better spectral efficiency than spatial multiplexing in the LOS scenario due to the boresight path, and two spatial streams usually offer the highest spectral efficiency at most NLOS locations due to the limited number of spatial clusters.
2206.12139
Qi Liao
Qi Liao and Tianlun Hu and Nikolaj Marchenko and Peter Kulics and Lutz Ewe
HARU: Haptic Augmented Reality-Assisted User-Centric Industrial Network Planning
null
null
null
null
cs.NI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To support Industry 4.0 applications with haptics and human-machine interaction, 6G requires a new framework that is fully autonomous, visual, and interactive. In this paper, we provide an end-to-end solution, HARU, for private network planning services, especially industrial networks. The solution consists of the following functions: collecting visual and sensory data from the user device, reconstructing the 3D radio propagation environment and conducting network planning on a server, and visualizing network performance with AR on the user device with enabled haptic feedback. The functions are empowered by three key technical components: 1) vision- and sensor fusion-based 3D environment reconstruction, 2) ray tracing-based radio map generation and network planning, and 3) AR-assisted network visualization enabled by real-time camera relocalization. We conducted a proof-of-concept in a Bosch plant in Germany and showed good network coverage of the optimized antenna location, as well as high accuracy in both environment reconstruction and camera relocalization. We also achieved real-time AR-supported network monitoring with an end-to-end latency of about $32$ ms per frame.
[ { "created": "Fri, 24 Jun 2022 08:02:48 GMT", "version": "v1" }, { "created": "Thu, 13 Oct 2022 14:18:07 GMT", "version": "v2" } ]
2022-10-14
[ [ "Liao", "Qi", "" ], [ "Hu", "Tianlun", "" ], [ "Marchenko", "Nikolaj", "" ], [ "Kulics", "Peter", "" ], [ "Ewe", "Lutz", "" ] ]
To support Industry 4.0 applications with haptics and human-machine interaction, 6G requires a new framework that is fully autonomous, visual, and interactive. In this paper, we provide an end-to-end solution, HARU, for private network planning services, especially industrial networks. The solution consists of the following functions: collecting visual and sensory data from the user device, reconstructing the 3D radio propagation environment and conducting network planning on a server, and visualizing network performance with AR on the user device with enabled haptic feedback. The functions are empowered by three key technical components: 1) vision- and sensor fusion-based 3D environment reconstruction, 2) ray tracing-based radio map generation and network planning, and 3) AR-assisted network visualization enabled by real-time camera relocalization. We conducted a proof-of-concept in a Bosch plant in Germany and showed good network coverage of the optimized antenna location, as well as high accuracy in both environment reconstruction and camera relocalization. We also achieved real-time AR-supported network monitoring with an end-to-end latency of about $32$ ms per frame.
2305.17520
Uddeshya Upadhyay
Vikrant Rangnekar, Uddeshya Upadhyay, Zeynep Akata, Biplab Banerjee
USIM-DAL: Uncertainty-aware Statistical Image Modeling-based Dense Active Learning for Super-resolution
Accepted at UAI 2023
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc. However, the high cost of annotation and labeling makes it challenging to achieve accurate results. We propose incorporating active learning into dense regression models to address this problem. Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance. Despite its potential, active learning has not been widely explored in high-dimensional computer vision regression tasks like super-resolution. We address this research gap and propose a new framework called USIM-DAL that leverages the statistical properties of colour images to learn informative priors using probabilistic deep neural networks that model the heteroscedastic predictive distribution allowing uncertainty quantification. Moreover, the aleatoric uncertainty from the network serves as a proxy for error that is used for active learning. Our experiments on a wide variety of datasets spanning applications in natural images (visual genome, BSD100), medical imaging (histopathology slides), and remote sensing (satellite images) demonstrate the efficacy of the newly proposed USIM-DAL and superiority over several dense regression active learning methods.
[ { "created": "Sat, 27 May 2023 16:33:43 GMT", "version": "v1" } ]
2023-05-30
[ [ "Rangnekar", "Vikrant", "" ], [ "Upadhyay", "Uddeshya", "" ], [ "Akata", "Zeynep", "" ], [ "Banerjee", "Biplab", "" ] ]
Dense regression is a widely used approach in computer vision for tasks such as image super-resolution, enhancement, depth estimation, etc. However, the high cost of annotation and labeling makes it challenging to achieve accurate results. We propose incorporating active learning into dense regression models to address this problem. Active learning allows models to select the most informative samples for labeling, reducing the overall annotation cost while improving performance. Despite its potential, active learning has not been widely explored in high-dimensional computer vision regression tasks like super-resolution. We address this research gap and propose a new framework called USIM-DAL that leverages the statistical properties of colour images to learn informative priors using probabilistic deep neural networks that model the heteroscedastic predictive distribution allowing uncertainty quantification. Moreover, the aleatoric uncertainty from the network serves as a proxy for error that is used for active learning. Our experiments on a wide variety of datasets spanning applications in natural images (visual genome, BSD100), medical imaging (histopathology slides), and remote sensing (satellite images) demonstrate the efficacy of the newly proposed USIM-DAL and superiority over several dense regression active learning methods.
1905.04709
Gang Min
Gang Min, Changqing Zhang, Xiongwei Zhang, Wei Tan
Deep Vocoder: Low Bit Rate Compression of Speech with Deep Autoencoder
null
null
null
null
cs.MM cs.IT cs.SD eess.AS math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by the success of deep neural networks (DNNs) in speech processing, this paper presents Deep Vocoder, a direct end-to-end low bit rate speech compression method with a deep autoencoder (DAE). In Deep Vocoder, the DAE is used for extracting the latent representing features (LRFs) of speech, which are then efficiently quantized by an analysis-by-synthesis vector quantization (AbS VQ) method. AbS VQ aims to minimize the perceptual spectral reconstruction distortion rather than the distortion of the LRF vector itself. Also, a suboptimal codebook searching technique is proposed to further reduce the computational complexity. Experimental results demonstrate that Deep Vocoder yields substantial improvements in terms of frequency-weighted segmental SNR, STOI and PESQ score when compared to the output of the conventional SQ- or VQ-based codec. The yielded PESQ score over the TIMIT corpus is 3.34 and 3.08 for speech coding at 2400 bit/s and 1200 bit/s, respectively.
[ { "created": "Sun, 12 May 2019 12:24:27 GMT", "version": "v1" }, { "created": "Tue, 14 May 2019 09:06:34 GMT", "version": "v2" } ]
2019-05-15
[ [ "Min", "Gang", "" ], [ "Zhang", "Changqing", "" ], [ "Zhang", "Xiongwei", "" ], [ "Tan", "Wei", "" ] ]
Inspired by the success of deep neural networks (DNNs) in speech processing, this paper presents Deep Vocoder, a direct end-to-end low bit rate speech compression method with a deep autoencoder (DAE). In Deep Vocoder, the DAE is used for extracting the latent representing features (LRFs) of speech, which are then efficiently quantized by an analysis-by-synthesis vector quantization (AbS VQ) method. AbS VQ aims to minimize the perceptual spectral reconstruction distortion rather than the distortion of the LRF vector itself. Also, a suboptimal codebook searching technique is proposed to further reduce the computational complexity. Experimental results demonstrate that Deep Vocoder yields substantial improvements in terms of frequency-weighted segmental SNR, STOI and PESQ score when compared to the output of the conventional SQ- or VQ-based codec. The yielded PESQ score over the TIMIT corpus is 3.34 and 3.08 for speech coding at 2400 bit/s and 1200 bit/s, respectively.
2406.08753
Ricardo Grando
Hiago Sodre, Sebastian Barcelona, Anthony Scirgalea, Brandon Macedo, Gabriel Sampson, Pablo Moraes, William Moraes, Victoria Saravia, Juan Deniz, Bruna Guterres, Andre Kelbouscas, Ricardo Grando
UruBots UAV -- Air Emergency Service Indoor Team Description Paper for FIRA 2024
Team Description Paper for the FIRA RoboWorld Cup 2024
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This document describes the "UruBots" team for the 2024 FIRA Air League, "Air Emergency Service (Indoor)." We introduce our team and an autonomous Unmanned Aerial Vehicle (UAV) that relies on computer vision for its flight control. This UAV can perform a wide variety of navigation tasks in indoor environments without requiring the intervention of an external operator or any form of external processing, resulting in a significant decrease in workload and manual dependence. Additionally, our software has been designed to be compatible with the vehicle's structure and for its application to the competition circuit. In this paper, we detail additional aspects of the mechanical structure, software, and application to the FIRA competition.
[ { "created": "Thu, 13 Jun 2024 02:23:29 GMT", "version": "v1" } ]
2024-06-14
[ [ "Sodre", "Hiago", "" ], [ "Barcelona", "Sebastian", "" ], [ "Scirgalea", "Anthony", "" ], [ "Macedo", "Brandon", "" ], [ "Sampson", "Gabriel", "" ], [ "Moraes", "Pablo", "" ], [ "Moraes", "William", "" ], [ "Saravia", "Victoria", "" ], [ "Deniz", "Juan", "" ], [ "Guterres", "Bruna", "" ], [ "Kelbouscas", "Andre", "" ], [ "Grando", "Ricardo", "" ] ]
This document describes the "UruBots" team for the 2024 FIRA Air League, "Air Emergency Service (Indoor)." We introduce our team and an autonomous Unmanned Aerial Vehicle (UAV) that relies on computer vision for its flight control. This UAV can perform a wide variety of navigation tasks in indoor environments without requiring the intervention of an external operator or any form of external processing, resulting in a significant decrease in workload and manual dependence. Additionally, our software has been designed to be compatible with the vehicle's structure and for its application to the competition circuit. In this paper, we detail additional aspects of the mechanical structure, software, and application to the FIRA competition.
1405.3352
Ziqiang Chen
F. Lu and Z. Chen
Newton-Type Iterative Solver for Multiple View $L2$ Triangulation
15 pages, 1 figure, 4 tables, 30 references, C++ source codes
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this note, we show that the L2 optimal solutions to most real multiple view L2 triangulation problems can be efficiently obtained by two-stage Newton-like iterative methods, while the difficulty of such problems mainly lies in how to verify the L2 optimality. Such a working two-stage bundle adjustment approach features: first, the algorithm is initialized by symmedian point triangulation, a multiple-view generalization of the mid-point method; second, a symbolic-numeric method is employed to compute derivatives accurately; third, a globalizing strategy such as line search or trust region is smoothly applied to the underlying iteration, which assures algorithm robustness in general cases. Numerical comparison with the tfml method shows that the local minimizers obtained by the two-stage iterative bundle adjustment approach proposed here are also the L2 optimal solutions to all the calibrated data sets available online from the Oxford visual geometry group. Extensive numerical experiments indicate the bundle adjustment approach solves more than 99% of the real triangulation problems optimally. An IEEE 754 double precision C++ implementation shows that it takes only about 0.205 seconds to compute all the 4983 points in the Oxford dinosaur data set via Gauss-Newton iteration hybrid with a line search strategy on a computer with a 3.4GHz Intel i7 CPU.
[ { "created": "Wed, 14 May 2014 03:35:56 GMT", "version": "v1" }, { "created": "Mon, 30 Jun 2014 23:39:32 GMT", "version": "v2" } ]
2014-07-02
[ [ "Lu", "F.", "" ], [ "Chen", "Z.", "" ] ]
In this note, we show that the L2 optimal solutions to most real multiple view L2 triangulation problems can be efficiently obtained by two-stage Newton-like iterative methods, while the difficulty of such problems mainly lies in how to verify the L2 optimality. Such a working two-stage bundle adjustment approach features: first, the algorithm is initialized by symmedian point triangulation, a multiple-view generalization of the mid-point method; second, a symbolic-numeric method is employed to compute derivatives accurately; third, a globalizing strategy such as line search or trust region is smoothly applied to the underlying iteration, which assures algorithm robustness in general cases. Numerical comparison with the tfml method shows that the local minimizers obtained by the two-stage iterative bundle adjustment approach proposed here are also the L2 optimal solutions to all the calibrated data sets available online from the Oxford visual geometry group. Extensive numerical experiments indicate the bundle adjustment approach solves more than 99% of the real triangulation problems optimally. An IEEE 754 double precision C++ implementation shows that it takes only about 0.205 seconds to compute all the 4983 points in the Oxford dinosaur data set via Gauss-Newton iteration hybrid with a line search strategy on a computer with a 3.4GHz Intel i7 CPU.
2106.14610
Henrique Ferraz de Arruda
Henrique Ferraz de Arruda, Luciano da Fontoura Costa
A keyword-driven approach to science
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
To a good extent, words can be understood as corresponding to patterns or categories that appeared in order to represent concepts and structures that are particularly important or useful in a given time and space. Words are characterized by not being completely general nor specific, in the sense that the same word can be instantiated or related to several different contexts, depending on specific situations. Indeed, the way in which words are instantiated and associated represents a particularly interesting aspect that can substantially help to better understand the context in which they are employed. Scientific words are no exception to that. In the present work, we approach the associations between a set of particularly relevant words in the sense of being not only frequently used in several areas, but also representing concepts that are currently related to some of the main standing challenges in science. More specifically, the study reported here takes into account the words "prediction", "model", "optimization", "complex", "entropy", "random", "deterministic", "pattern", and "database". In order to complement the analysis, we also obtain a network representing the relationship between the adopted areas. Many interesting results were found. First and foremost, several of the words were observed to have markedly distinct associations in different areas. Biology was found to be related to computer science, sharing associations with databases. Furthermore, for most of the cases, the words "complex", "model", and "prediction" were observed to have several strong associations.
[ { "created": "Mon, 31 May 2021 22:06:20 GMT", "version": "v1" }, { "created": "Mon, 19 Jul 2021 21:35:07 GMT", "version": "v2" } ]
2021-07-21
[ [ "de Arruda", "Henrique Ferraz", "" ], [ "Costa", "Luciano da Fontoura", "" ] ]
To a good extent, words can be understood as corresponding to patterns or categories that appeared in order to represent concepts and structures that are particularly important or useful in a given time and space. Words are characterized by not being completely general nor specific, in the sense that the same word can be instantiated or related to several different contexts, depending on specific situations. Indeed, the way in which words are instantiated and associated represents a particularly interesting aspect that can substantially help to better understand the context in which they are employed. Scientific words are no exception to that. In the present work, we approach the associations between a set of particularly relevant words in the sense of being not only frequently used in several areas, but also representing concepts that are currently related to some of the main standing challenges in science. More specifically, the study reported here takes into account the words "prediction", "model", "optimization", "complex", "entropy", "random", "deterministic", "pattern", and "database". In order to complement the analysis, we also obtain a network representing the relationship between the adopted areas. Many interesting results were found. First and foremost, several of the words were observed to have markedly distinct associations in different areas. Biology was found to be related to computer science, sharing associations with databases. Furthermore, for most of the cases, the words "complex", "model", and "prediction" were observed to have several strong associations.
2011.04246
Lun Quan
Lun Quan, Zhiwei Zhang, Xingguang Zhong, Chao Xu and Fei Gao
EVA-Planner: Environmental Adaptive Quadrotor Planning
IEEE International Conference on Robotics and Automation (ICRA 2021)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The quadrotor is popularly used in challenging environments due to its superior agility and flexibility. In these scenarios, trajectory planning plays a vital role in generating safe motions to avoid obstacles while ensuring flight smoothness. Although many works on quadrotor planning have been proposed, a research gap exists in incorporating self-adaptation into a planning framework to enable a drone to automatically fly slower in denser environments and increase its speed in a safer area. In this paper, we propose an environmental adaptive planner to adjust the flight aggressiveness effectively based on the obstacle distribution and quadrotor state. Firstly, we design an environmental adaptive safety-aware method to assign the priority of the surrounding obstacles according to the environmental risk level and instantaneous motion tendency. Then, we apply it in a multi-layered model predictive contouring control (Multi-MPCC) framework to generate adaptive, safe, and dynamically feasible local trajectories. Extensive simulations and real-world experiments verify the efficiency and robustness of our planning framework. Benchmark comparison also shows the superior performance of our method over another advanced environmental adaptive planning algorithm. Moreover, we release our planning framework as open-source ROS packages.
[ { "created": "Mon, 9 Nov 2020 08:32:25 GMT", "version": "v1" }, { "created": "Mon, 5 Jul 2021 13:02:23 GMT", "version": "v2" } ]
2021-07-06
[ [ "Quan", "Lun", "" ], [ "Zhang", "Zhiwei", "" ], [ "Zhong", "Xingguang", "" ], [ "Xu", "Chao", "" ], [ "Gao", "Fei", "" ] ]
The quadrotor is popularly used in challenging environments due to its superior agility and flexibility. In these scenarios, trajectory planning plays a vital role in generating safe motions to avoid obstacles while ensuring flight smoothness. Although many works on quadrotor planning have been proposed, a research gap exists in incorporating self-adaptation into a planning framework to enable a drone to automatically fly slower in denser environments and increase its speed in a safer area. In this paper, we propose an environmental adaptive planner to adjust the flight aggressiveness effectively based on the obstacle distribution and quadrotor state. Firstly, we design an environmental adaptive safety-aware method to assign the priority of the surrounding obstacles according to the environmental risk level and instantaneous motion tendency. Then, we apply it in a multi-layered model predictive contouring control (Multi-MPCC) framework to generate adaptive, safe, and dynamically feasible local trajectories. Extensive simulations and real-world experiments verify the efficiency and robustness of our planning framework. Benchmark comparison also shows the superior performance of our method over another advanced environmental adaptive planning algorithm. Moreover, we release our planning framework as open-source ROS packages.
2009.07489
Sufeng Duan
Sufeng Duan, Hai Zhao and Rui Wang
Graph-to-Sequence Neural Machine Translation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural machine translation (NMT) usually works in a seq2seq learning way by viewing either the source or target sentence as a linear sequence of words, which can be regarded as a special case of a graph, taking words in the sequence as nodes and relationships between words as edges. Given that current NMT models more or less capture graph information among the sequence in a latent way, we present a graph-to-sequence model facilitating explicit graph information capturing. In detail, we propose a graph-based SAN-based NMT model called Graph-Transformer that captures information of subgraphs of different orders in every layer. Subgraphs are put into different groups according to their orders, and each group of subgraphs reflects a different level of dependency between words. For fusing subgraph representations, we empirically explore three methods which weight different groups of subgraphs of different orders. Results of experiments on WMT14 English-German and IWSLT14 German-English show that our method can effectively boost the Transformer with an improvement of 1.1 BLEU points on the WMT14 English-German dataset and 1.0 BLEU points on the IWSLT14 German-English dataset.
[ { "created": "Wed, 16 Sep 2020 06:28:58 GMT", "version": "v1" } ]
2020-09-17
[ [ "Duan", "Sufeng", "" ], [ "Zhao", "Hai", "" ], [ "Wang", "Rui", "" ] ]
Neural machine translation (NMT) usually works in a seq2seq learning way by viewing either the source or target sentence as a linear sequence of words, which can be regarded as a special case of a graph, taking words in the sequence as nodes and relationships between words as edges. Given that current NMT models more or less capture graph information among the sequence in a latent way, we present a graph-to-sequence model facilitating explicit graph information capturing. In detail, we propose a graph-based SAN-based NMT model called Graph-Transformer that captures information of subgraphs of different orders in every layer. Subgraphs are put into different groups according to their orders, and each group of subgraphs reflects a different level of dependency between words. For fusing subgraph representations, we empirically explore three methods which weight different groups of subgraphs of different orders. Results of experiments on WMT14 English-German and IWSLT14 German-English show that our method can effectively boost the Transformer with an improvement of 1.1 BLEU points on the WMT14 English-German dataset and 1.0 BLEU points on the IWSLT14 German-English dataset.
2305.19398
Eric Heisler
Eric Heisler and Cheng-Hau Yang and Aadesh Deshmukh and Baskar Ganapathysubramanian and Hari Sundar
Generating Finite Element Codes combining Adaptive Octrees with Complex Geometries
null
null
null
null
cs.CE
http://creativecommons.org/licenses/by-sa/4.0/
We present a high-level domain-specific language (DSL) interface to drive an adaptive incomplete $k$-d tree-based framework for finite element (FEM) solutions to PDEs. This DSL provides three key advances: (a) it abstracts out the complexity of implementing non-trivial FEM formulations, (b) it simplifies deploying these formulations on arbitrarily complicated and adaptively refined meshes, and (c) it exhibits good parallel performance. Taken together, the DSL interface allows end-users to rapidly and efficiently prototype new mathematical approaches, and deploy them on large clusters for solving practical problems. We illustrate this DSL by implementing a workflow for solving PDEs using the recently developed shifted boundary method (SBM). The SBM requires approximating relatively complicated integrals over boundary surfaces. Using a high-level DSL greatly simplifies this process and allows rapid exploration of variations. We demonstrate these tools on a variety of 2-D and 3-D configurations. With fewer than 20 lines of input, we can produce a parallel code that scales well to thousands of processes. This generated code is made accessible and readable to be easily modified and hand-tuned, making this tool useful even to experts with the target software.
[ { "created": "Tue, 30 May 2023 20:25:25 GMT", "version": "v1" } ]
2023-06-01
[ [ "Heisler", "Eric", "" ], [ "Yang", "Cheng-Hau", "" ], [ "Deshmukh", "Aadesh", "" ], [ "Ganapathysubramanian", "Baskar", "" ], [ "Sundar", "Hari", "" ] ]
We present a high-level domain-specific language (DSL) interface to drive an adaptive incomplete $k$-d tree-based framework for finite element (FEM) solutions to PDEs. This DSL provides three key advances: (a) it abstracts out the complexity of implementing non-trivial FEM formulations, (b) it simplifies deploying these formulations on arbitrarily complicated and adaptively refined meshes, and (c) it exhibits good parallel performance. Taken together, the DSL interface allows end-users to rapidly and efficiently prototype new mathematical approaches, and deploy them on large clusters for solving practical problems. We illustrate this DSL by implementing a workflow for solving PDEs using the recently developed shifted boundary method (SBM). The SBM requires approximating relatively complicated integrals over boundary surfaces. Using a high-level DSL greatly simplifies this process and allows rapid exploration of variations. We demonstrate these tools on a variety of 2-D and 3-D configurations. With fewer than 20 lines of input, we can produce a parallel code that scales well to thousands of processes. This generated code is made accessible and readable to be easily modified and hand-tuned, making this tool useful even to experts with the target software.
2311.10924
Theodore Pan
Slobodan Mitrovi\'c, Theodore Pan
Faster Streaming and Scalable Algorithms for Finding Directed Dense Subgraphs in Large Graphs
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Finding dense subgraphs is a fundamental algorithmic tool in data mining, community detection, and clustering. In this problem, one aims to find an induced subgraph whose edge-to-vertex ratio is maximized. We study the directed case of this question in the context of semi-streaming and massively parallel algorithms. In particular, we show that it is possible to find a $(2+\epsilon)$ approximation on randomized streams even in a single pass by using $O(n \cdot {\rm poly} \log n)$ memory on $n$-vertex graphs. Our result improves over prior works, which were designed for arbitrary-ordered streams: the algorithm by Bahmani et al. (VLDB 2012) which uses $O(\log n)$ passes, and the work by Esfandiari et al. (2015) which makes one pass but uses $O(n^{3/2})$ memory. Moreover, our techniques extend to the Massively Parallel Computation model yielding $O(1)$ rounds in the super-linear and $O(\sqrt{\log n})$ rounds in the nearly-linear memory regime. This constitutes a quadratic improvement over state-of-the-art bounds by Bahmani et al. (VLDB 2012 and WAW 2014), which require $O(\log n)$ rounds even in the super-linear memory regime. Finally, we empirically evaluate our single-pass semi-streaming algorithm on $6$ benchmarks and show that, even on non-randomly ordered streams, the quality of its output is essentially the same as that of Bahmani et al. (VLDB 2012) while it is $2$ times faster on large graphs.
[ { "created": "Sat, 18 Nov 2023 00:58:05 GMT", "version": "v1" } ]
2023-11-21
[ [ "Mitrović", "Slobodan", "" ], [ "Pan", "Theodore", "" ] ]
Finding dense subgraphs is a fundamental algorithmic tool in data mining, community detection, and clustering. In this problem, one aims to find an induced subgraph whose edge-to-vertex ratio is maximized. We study the directed case of this question in the context of semi-streaming and massively parallel algorithms. In particular, we show that it is possible to find a $(2+\epsilon)$ approximation on randomized streams even in a single pass by using $O(n \cdot {\rm poly} \log n)$ memory on $n$-vertex graphs. Our result improves over prior works, which were designed for arbitrary-ordered streams: the algorithm by Bahmani et al. (VLDB 2012) which uses $O(\log n)$ passes, and the work by Esfandiari et al. (2015) which makes one pass but uses $O(n^{3/2})$ memory. Moreover, our techniques extend to the Massively Parallel Computation model yielding $O(1)$ rounds in the super-linear and $O(\sqrt{\log n})$ rounds in the nearly-linear memory regime. This constitutes a quadratic improvement over state-of-the-art bounds by Bahmani et al. (VLDB 2012 and WAW 2014), which require $O(\log n)$ rounds even in the super-linear memory regime. Finally, we empirically evaluate our single-pass semi-streaming algorithm on $6$ benchmarks and show that, even on non-randomly ordered streams, the quality of its output is essentially the same as that of Bahmani et al. (VLDB 2012) while it is $2$ times faster on large graphs.
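The $2$-approximation guarantee these streaming results build on can be illustrated with the classic greedy peeling procedure. A minimal Python sketch for the simpler undirected case (the directed variant peels source and sink sides separately; this is not the paper's streaming algorithm):

```python
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Charikar-style greedy peeling: repeatedly remove a minimum-degree
    vertex and remember the densest intermediate subgraph, measured by
    the edge-to-vertex ratio. Gives a 2-approximation for undirected graphs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    vertices = set(adj)
    m = len(edges)
    best_density, best_set = m / len(vertices), set(vertices)
    while len(vertices) > 1:
        u = min(vertices, key=lambda x: len(adj[x]))  # minimum-degree vertex
        for w in adj[u]:
            adj[w].discard(u)
        m -= len(adj[u])  # all remaining edges incident to u disappear
        adj.pop(u)
        vertices.remove(u)
        density = m / len(vertices)
        if density > best_density:
            best_density, best_set = density, set(vertices)
    return best_density, best_set
```

For example, on a $K_4$ with one pendant vertex attached, peeling removes the pendant first and recovers the $K_4$ (density $6/4 = 1.5$) rather than the whole graph (density $7/5 = 1.4$).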
2106.04555
Tommi Kerola
Tommi Kerola, Jie Li, Atsushi Kanehira, Yasunori Kudo, Alexis Vallet, Adrien Gaidon
Hierarchical Lov\'asz Embeddings for Proposal-free Panoptic Segmentation
13 pages, 9 figures, including supplementary material. To be published in CVPR 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Panoptic segmentation brings together two separate tasks: instance and semantic segmentation. Although they are related, unifying them faces an apparent paradox: how to simultaneously learn instance-specific and category-specific (i.e. instance-agnostic) representations. Hence, state-of-the-art panoptic segmentation methods use complex models with a distinct stream for each task. In contrast, we propose Hierarchical Lov\'asz Embeddings, per-pixel feature vectors that simultaneously encode instance- and category-level discriminative information. We use a hierarchical Lov\'asz hinge loss to learn a low-dimensional embedding space structured into a unified semantic and instance hierarchy without requiring separate network branches or object proposals. Besides modeling instances precisely in a proposal-free manner, our Hierarchical Lov\'asz Embeddings generalize to categories by using a simple Nearest-Class-Mean classifier, including for non-instance "stuff" classes where instance segmentation methods are not applicable. Our simple model achieves state-of-the-art results compared to existing proposal-free panoptic segmentation methods on Cityscapes, COCO, and Mapillary Vistas. Furthermore, our model demonstrates temporal stability between video frames.
[ { "created": "Tue, 8 Jun 2021 17:43:54 GMT", "version": "v1" } ]
2021-06-09
[ [ "Kerola", "Tommi", "" ], [ "Li", "Jie", "" ], [ "Kanehira", "Atsushi", "" ], [ "Kudo", "Yasunori", "" ], [ "Vallet", "Alexis", "" ], [ "Gaidon", "Adrien", "" ] ]
Panoptic segmentation brings together two separate tasks: instance and semantic segmentation. Although they are related, unifying them faces an apparent paradox: how to simultaneously learn instance-specific and category-specific (i.e. instance-agnostic) representations. Hence, state-of-the-art panoptic segmentation methods use complex models with a distinct stream for each task. In contrast, we propose Hierarchical Lov\'asz Embeddings, per-pixel feature vectors that simultaneously encode instance- and category-level discriminative information. We use a hierarchical Lov\'asz hinge loss to learn a low-dimensional embedding space structured into a unified semantic and instance hierarchy without requiring separate network branches or object proposals. Besides modeling instances precisely in a proposal-free manner, our Hierarchical Lov\'asz Embeddings generalize to categories by using a simple Nearest-Class-Mean classifier, including for non-instance "stuff" classes where instance segmentation methods are not applicable. Our simple model achieves state-of-the-art results compared to existing proposal-free panoptic segmentation methods on Cityscapes, COCO, and Mapillary Vistas. Furthermore, our model demonstrates temporal stability between video frames.
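The Nearest-Class-Mean step is simple enough to sketch. Assuming per-pixel embedding vectors have already been extracted, a hypothetical pure-Python version:

```python
def class_means(features, labels):
    """Compute the mean feature vector (class prototype) for each label."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        if y not in sums:
            sums[y] = [0.0] * len(x)
            counts[y] = 0
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def ncm_predict(x, means):
    """Assign x to the class whose prototype is nearest in Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda y: dist2(x, means[y]))
```

The appeal of this classifier in the paper's setting is that adding a category only requires computing one more mean; no retraining of the embedding network is implied by the classification step itself.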
1004.0765
Secretary Aircc Journal
E.Sathiyamoorthy, N.Ch.Sriman Narayana Iyenger, V.Ramachandran (VIT University, India)
Agent Based Trust Management Model Based on Weight Value Model for Online Auctions
17 pages
International Journal of Network Security & Its Applications 1.3 (2009) 15-31
null
null
cs.GT cs.CR
http://creativecommons.org/licenses/by-nc-sa/3.0/
This paper addresses the problems that arise in traditional online auctions as a result of various anomalies in the reputation and trust calculation mechanism. We try to improve the scalability and efficiency of online auctions by providing an efficient trust management methodology that takes several factors into consideration. A comparison between the performance of the auction system with and without the agent methodology is carried out, with good results.
[ { "created": "Tue, 6 Apr 2010 03:43:17 GMT", "version": "v1" } ]
2010-07-15
[ [ "Sathiyamoorthy", "E.", "", "VIT\n University, India" ], [ "Iyenger", "N. Ch. Sriman Narayana", "", "VIT\n University, India" ], [ "Ramachandran", "V.", "", "VIT\n University, India" ] ]
This paper addresses the problems that arise in traditional online auctions as a result of various anomalies in the reputation and trust calculation mechanism. We try to improve the scalability and efficiency of online auctions by providing an efficient trust management methodology that takes several factors into consideration. A comparison between the performance of the auction system with and without the agent methodology is carried out, with good results.
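The abstract does not spell out the weight value model, but the general shape of a weighted trust aggregation can be sketched as follows; the factor names and weights are entirely hypothetical:

```python
def trust_score(factors, weights):
    """Aggregate per-factor ratings in [0, 1] into a single trust value
    via a normalized weighted sum. Factor names and weight values are
    illustrative placeholders, not the paper's exact model."""
    total_w = sum(weights[f] for f in factors)
    return sum(weights[f] * factors[f] for f in factors) / total_w
```

Weighting lets the auction system emphasize, say, recent feedback over account age without changing the aggregation code.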
1708.09129
Chien-Chun Ni
Xiaotian Yin, Yu-Yao Lin, Chien-Chun Ni, Jiaxin Ding, Wei Han, Dengpan Zhou, Jie Gao, Xianfeng Gu
Decentralized Trajectory Tracking Using Homology and Hodge Decomposition in Sensor Networks
30 pages, 10 figures, submitted to ACM TSAS
null
null
null
cs.NI cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the recent development of localization and tracking systems for both indoor and outdoor settings, we consider the problem of sensing, representing and analyzing human movement trajectories that we expect to gather in the near future. In this paper, we propose to use the topological representation, which records how a target moves around the natural obstacles in the underlying environment. We demonstrate that the topological information can be sufficiently descriptive for many applications and efficient enough for storing, comparing and classifying these natural human trajectories. We pre-process the sensor network with a purely decentralized algorithm such that certain edges are given numerical weights. Then we can perform trajectory classification by simply summing up the edge weights along the trajectory. Our method supports real-time classification of trajectories with minimum communication cost. We test the effectiveness of our approach by showing how to classify randomly generated trajectories in a multi-level arts museum layout as well as how to distinguish real-world taxi trajectories in a large city.
[ { "created": "Wed, 30 Aug 2017 05:42:18 GMT", "version": "v1" } ]
2017-08-31
[ [ "Yin", "Xiaotian", "" ], [ "Lin", "Yu-Yao", "" ], [ "Ni", "Chien-Chun", "" ], [ "Ding", "Jiaxin", "" ], [ "Han", "Wei", "" ], [ "Zhou", "Dengpan", "" ], [ "Gao", "Jie", "" ], [ "Gu", "Xianfeng", "" ] ]
With the recent development of localization and tracking systems for both indoor and outdoor settings, we consider the problem of sensing, representing and analyzing human movement trajectories that we expect to gather in the near future. In this paper, we propose to use the topological representation, which records how a target moves around the natural obstacles in the underlying environment. We demonstrate that the topological information can be sufficiently descriptive for many applications and efficient enough for storing, comparing and classifying these natural human trajectories. We pre-process the sensor network with a purely decentralized algorithm such that certain edges are given numerical weights. Then we can perform trajectory classification by simply summing up the edge weights along the trajectory. Our method supports real-time classification of trajectories with minimum communication cost. We test the effectiveness of our approach by showing how to classify randomly generated trajectories in a multi-level arts museum layout as well as how to distinguish real-world taxi trajectories in a large city.
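The classification step described above — summing precomputed edge weights along a trajectory — can be sketched directly. Edge weight values and node names below are hypothetical:

```python
def trajectory_signature(trajectory, edge_weights):
    """Sum precomputed edge weights along a trajectory of node readings.
    Trajectories that wind differently around obstacles accumulate
    different totals, so the sum serves as a homology-type signature."""
    total = 0.0
    for u, v in zip(trajectory, trajectory[1:]):
        # Weights are signed: traversing an edge backwards negates it.
        if (u, v) in edge_weights:
            total += edge_weights[(u, v)]
        else:
            total -= edge_weights[(v, u)]
    return total
```

Because each sensor only needs its own incident edge weights, the sum can be accumulated in-network as the target moves, which is what keeps the communication cost minimal.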
2312.08009
Kewei Wang
Kewei Wang, Yizheng Wu, Zhiyu Pan, Xingyi Li, Ke Xian, Zhe Wang, Zhiguo Cao, Guosheng Lin
Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix
This paper is accepted by AAAI2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Class-agnostic motion prediction methods aim to comprehend motion within open-world scenarios, holding significance for autonomous driving systems. However, training a high-performance model in a fully-supervised manner always requires substantial amounts of manually annotated data, which can be both expensive and time-consuming to obtain. To address this challenge, our study explores the potential of semi-supervised learning (SSL) for class-agnostic motion prediction. Our SSL framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data by generating pseudo labels through test-time inference. To improve the quality of pseudo labels, we propose a novel motion selection and re-generation module. This module effectively selects reliable pseudo labels and re-generates unreliable ones. Furthermore, we propose two data augmentation strategies: temporal sampling and BEVMix. These strategies facilitate consistency regularization in SSL. Experiments conducted on nuScenes demonstrate that our SSL method can surpass the self-supervised approach by a large margin by utilizing only a tiny fraction of labeled data. Furthermore, our method exhibits comparable performance to weakly and some fully supervised methods. These results highlight the ability of our method to strike a favorable balance between annotation costs and performance. Code will be available at https://github.com/kwwcv/SSMP.
[ { "created": "Wed, 13 Dec 2023 09:32:50 GMT", "version": "v1" }, { "created": "Thu, 14 Dec 2023 11:16:05 GMT", "version": "v2" } ]
2023-12-15
[ [ "Wang", "Kewei", "" ], [ "Wu", "Yizheng", "" ], [ "Pan", "Zhiyu", "" ], [ "Li", "Xingyi", "" ], [ "Xian", "Ke", "" ], [ "Wang", "Zhe", "" ], [ "Cao", "Zhiguo", "" ], [ "Lin", "Guosheng", "" ] ]
Class-agnostic motion prediction methods aim to comprehend motion within open-world scenarios, holding significance for autonomous driving systems. However, training a high-performance model in a fully-supervised manner always requires substantial amounts of manually annotated data, which can be both expensive and time-consuming to obtain. To address this challenge, our study explores the potential of semi-supervised learning (SSL) for class-agnostic motion prediction. Our SSL framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data by generating pseudo labels through test-time inference. To improve the quality of pseudo labels, we propose a novel motion selection and re-generation module. This module effectively selects reliable pseudo labels and re-generates unreliable ones. Furthermore, we propose two data augmentation strategies: temporal sampling and BEVMix. These strategies facilitate consistency regularization in SSL. Experiments conducted on nuScenes demonstrate that our SSL method can surpass the self-supervised approach by a large margin by utilizing only a tiny fraction of labeled data. Furthermore, our method exhibits comparable performance to weakly and some fully supervised methods. These results highlight the ability of our method to strike a favorable balance between annotation costs and performance. Code will be available at https://github.com/kwwcv/SSMP.
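A much-simplified sketch of the selection idea: partition pseudo-labelled samples by prediction confidence, keeping reliable ones for self-training and flagging the rest for re-generation. The paper's actual module is considerably more involved; the threshold rule and tuple layout here are assumptions:

```python
def split_pseudo_labels(predictions, threshold):
    """Partition (sample_id, label, confidence) triples into reliable
    pseudo labels (kept for self-training) and unreliable ones
    (flagged for re-generation)."""
    reliable, unreliable = [], []
    for sample_id, label, confidence in predictions:
        bucket = reliable if confidence >= threshold else unreliable
        bucket.append((sample_id, label))
    return reliable, unreliable
```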
1708.00606
Zhiyong Chen
Xiao Yang, Zhiyong Chen, Kuikui Li, Yaping Sun and Hongming Zheng
Optimal Task Scheduling in Communication-Constrained Mobile Edge Computing Systems for Wireless Virtual Reality
submitted to APCC 2017
null
null
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile edge computing (MEC) is expected to be an effective solution to deliver 360-degree virtual reality (VR) videos over wireless networks. In contrast to previous computation-constrained MEC frameworks, which reduce the computation-resource consumption at the mobile VR device by increasing the communication-resource consumption, we develop a communications-constrained MEC framework that reduces communication-resource consumption by increasing the computation-resource consumption and exploiting the caching resources at the mobile VR device. Specifically, according to the task modularization, the MEC server can deliver only the components which have not been stored in the VR device, and the VR device then uses the received components and the corresponding cached components to construct the task, resulting in low communication-resource consumption but high delay. The MEC server can also compute the task by itself to reduce the delay; however, this consumes more communication resources due to the delivery of the entire task. We therefore propose a task scheduling strategy to decide which computation model the MEC server should operate, in order to minimize the communication-resource consumption under the delay constraint. Finally, we discuss the tradeoffs between communications, computing, and caching in the proposed system.
[ { "created": "Wed, 2 Aug 2017 05:33:36 GMT", "version": "v1" } ]
2017-08-03
[ [ "Yang", "Xiao", "" ], [ "Chen", "Zhiyong", "" ], [ "Li", "Kuikui", "" ], [ "Sun", "Yaping", "" ], [ "Zheng", "Hongming", "" ] ]
Mobile edge computing (MEC) is expected to be an effective solution to deliver 360-degree virtual reality (VR) videos over wireless networks. In contrast to previous computation-constrained MEC frameworks, which reduce the computation-resource consumption at the mobile VR device by increasing the communication-resource consumption, we develop a communications-constrained MEC framework that reduces communication-resource consumption by increasing the computation-resource consumption and exploiting the caching resources at the mobile VR device. Specifically, according to the task modularization, the MEC server can deliver only the components which have not been stored in the VR device, and the VR device then uses the received components and the corresponding cached components to construct the task, resulting in low communication-resource consumption but high delay. The MEC server can also compute the task by itself to reduce the delay; however, this consumes more communication resources due to the delivery of the entire task. We therefore propose a task scheduling strategy to decide which computation model the MEC server should operate, in order to minimize the communication-resource consumption under the delay constraint. Finally, we discuss the tradeoffs between communications, computing, and caching in the proposed system.
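The scheduling decision can be sketched as choosing, per task, the option with the lower communication cost that still meets the deadline. The delay model below is a deliberately crude assumption, not the paper's formulation:

```python
def schedule(task_size_bits, missing_bits, server_rate, device_rate,
             bandwidth, deadline):
    """Decide whether the MEC server computes the whole task and delivers
    it (task_size_bits over the air), or delivers only the uncached
    components (missing_bits) for the VR device to assemble locally.
    Among deadline-feasible options, pick the one with fewer bits sent.
    All parameter names and the linear delay model are illustrative."""
    # Option A: server computes, then delivers the full task.
    delay_a = task_size_bits / bandwidth + task_size_bits / server_rate
    # Option B: deliver missing components; device constructs the task.
    delay_b = missing_bits / bandwidth + task_size_bits / device_rate
    options = []
    if delay_a <= deadline:
        options.append((task_size_bits, "server"))
    if delay_b <= deadline:
        options.append((missing_bits, "device"))
    if not options:
        return None  # infeasible under this delay constraint
    return min(options)[1]
```

This reproduces the tradeoff in the abstract: local construction saves bandwidth but adds device-side delay, so tightening the deadline flips the decision back to server-side computing.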
2102.05737
Xuan Lu
Xuan Lu, Wei Ai, Zhenpeng Chen, Yanbin Cao, Qiaozhu Mei
Emojis predict dropouts of remote workers: An empirical study of emoji usage on GitHub
null
PLOS ONE 17(2022):1-21
10.1371/journal.pone.0261262
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emotions at work have long been identified as critical signals of work motivations, status, and attitudes, and as predictors of various work-related outcomes. As more and more employees work remotely, these emotional signals of workers become harder to observe through daily, face-to-face communications. The use of online platforms to communicate and collaborate at work provides an alternative channel to monitor the emotions of workers. This paper studies how emojis, as non-verbal cues in online communications, can be used for such purposes and how the emotional signals in emoji usage can be used to predict future behavior of workers. In particular, we present how the developers on GitHub use emojis in their work-related activities. We show that developers have diverse patterns of emoji usage, which can be related to their working status including activity levels, types of work, types of communications, time management, and other behavioral patterns. Developers who use emojis in their posts are significantly less likely to drop out of the online work platform. Surprisingly, using emoji usage alone as features, standard machine learning models can predict future dropouts of developers with satisfactory accuracy. Features related to the general use and the emotions of emojis appear to be important factors, while they do not rule out paths through other purposes of emoji use.
[ { "created": "Wed, 10 Feb 2021 20:59:43 GMT", "version": "v1" }, { "created": "Thu, 27 Jan 2022 17:22:11 GMT", "version": "v2" } ]
2022-01-28
[ [ "Lu", "Xuan", "" ], [ "Ai", "Wei", "" ], [ "Chen", "Zhenpeng", "" ], [ "Cao", "Yanbin", "" ], [ "Mei", "Qiaozhu", "" ] ]
Emotions at work have long been identified as critical signals of work motivations, status, and attitudes, and as predictors of various work-related outcomes. As more and more employees work remotely, these emotional signals of workers become harder to observe through daily, face-to-face communications. The use of online platforms to communicate and collaborate at work provides an alternative channel to monitor the emotions of workers. This paper studies how emojis, as non-verbal cues in online communications, can be used for such purposes and how the emotional signals in emoji usage can be used to predict future behavior of workers. In particular, we present how the developers on GitHub use emojis in their work-related activities. We show that developers have diverse patterns of emoji usage, which can be related to their working status including activity levels, types of work, types of communications, time management, and other behavioral patterns. Developers who use emojis in their posts are significantly less likely to drop out of the online work platform. Surprisingly, using emoji usage alone as features, standard machine learning models can predict future dropouts of developers with satisfactory accuracy. Features related to the general use and the emotions of emojis appear to be important factors, while they do not rule out paths through other purposes of emoji use.
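Turning a worker's posts into emoji-usage features of the kind fed to standard classifiers might look like the sketch below; the regex covers only common emoji blocks and the feature set is illustrative, not the study's exact feature list:

```python
import re

# Rough matcher over common Unicode emoji blocks (illustrative, not exhaustive).
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def emoji_features(posts):
    """Summarize a worker's posts as simple emoji-usage statistics."""
    n_posts = len(posts)
    n_emoji = sum(len(EMOJI.findall(p)) for p in posts)
    posts_with = sum(1 for p in posts if EMOJI.search(p))
    return {
        "emoji_per_post": n_emoji / n_posts if n_posts else 0.0,
        "share_posts_with_emoji": posts_with / n_posts if n_posts else 0.0,
    }
```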
2205.02225
Shuliang Liu
Xuming Hu, Shuliang Liu, Chenwei Zhang, Shu`ang Li, Lijie Wen, Philip S. Yu
HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction
In NAACL 2022 as a long paper. Code and data available at https://github.com/THU-BPM/HiURE
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution. Existing works either utilize self-supervised schemes to refine relational feature signals by iteratively leveraging adaptive clustering and classification that provoke gradual drift problems, or adopt instance-wise contrastive learning which unreasonably pushes apart those sentence pairs that are semantically similar. To overcome these defects, we propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention and effectively optimize relation representation of sentences under exemplar-wise contrastive learning. Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
[ { "created": "Wed, 4 May 2022 17:56:48 GMT", "version": "v1" }, { "created": "Thu, 5 May 2022 19:08:32 GMT", "version": "v2" }, { "created": "Mon, 20 Feb 2023 07:59:36 GMT", "version": "v3" } ]
2023-02-21
[ [ "Hu", "Xuming", "" ], [ "Liu", "Shuliang", "" ], [ "Zhang", "Chenwei", "" ], [ "Li", "Shu`ang", "" ], [ "Wen", "Lijie", "" ], [ "Yu", "Philip S.", "" ] ]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution. Existing works either utilize self-supervised schemes to refine relational feature signals by iteratively leveraging adaptive clustering and classification that provoke gradual drift problems, or adopt instance-wise contrastive learning which unreasonably pushes apart those sentence pairs that are semantically similar. To overcome these defects, we propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention and effectively optimize relation representation of sentences under exemplar-wise contrastive learning. Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
2107.05445
Yipeng Zhang
Yipeng Zhang, Tyler L. Hayes, Christopher Kanan
Disentangling Transfer and Interference in Multi-Domain Learning
AAAI 2022 PracticalDL Workshop
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans are incredibly good at transferring knowledge from one domain to another, enabling rapid learning of new tasks. Likewise, transfer learning has enabled enormous success in many computer vision problems using pretraining. However, the benefits of transfer in multi-domain learning, where a network learns multiple tasks defined by different datasets, have not been adequately studied. Learning multiple domains could be beneficial, or these domains could interfere with each other given limited network capacity. Understanding how deep neural networks of varied capacity facilitate transfer across inputs from different distributions is a critical step towards open world learning. In this work, we decipher the conditions where interference and knowledge transfer occur in multi-domain learning. We propose new metrics disentangling interference and transfer, set up experimental protocols, and examine the roles of network capacity, task grouping, and dynamic loss weighting in reducing interference and facilitating transfer.
[ { "created": "Fri, 2 Jul 2021 01:30:36 GMT", "version": "v1" }, { "created": "Fri, 16 Jul 2021 01:14:21 GMT", "version": "v2" }, { "created": "Thu, 16 Sep 2021 01:59:09 GMT", "version": "v3" }, { "created": "Fri, 14 Jan 2022 22:41:18 GMT", "version": "v4" } ]
2022-01-19
[ [ "Zhang", "Yipeng", "" ], [ "Hayes", "Tyler L.", "" ], [ "Kanan", "Christopher", "" ] ]
Humans are incredibly good at transferring knowledge from one domain to another, enabling rapid learning of new tasks. Likewise, transfer learning has enabled enormous success in many computer vision problems using pretraining. However, the benefits of transfer in multi-domain learning, where a network learns multiple tasks defined by different datasets, have not been adequately studied. Learning multiple domains could be beneficial, or these domains could interfere with each other given limited network capacity. Understanding how deep neural networks of varied capacity facilitate transfer across inputs from different distributions is a critical step towards open world learning. In this work, we decipher the conditions where interference and knowledge transfer occur in multi-domain learning. We propose new metrics disentangling interference and transfer, set up experimental protocols, and examine the roles of network capacity, task grouping, and dynamic loss weighting in reducing interference and facilitating transfer.
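In its simplest form, disentangling transfer from interference reduces to comparing multi-domain accuracy against single-domain baselines, per domain. The metric shape below is illustrative, not the paper's exact definition:

```python
def transfer_matrix(single_acc, multi_acc):
    """Per-domain transfer: multi-domain accuracy minus the single-domain
    baseline. Positive values indicate transfer; negative values
    indicate interference. (Illustrative metric form.)"""
    return {d: multi_acc[d] - single_acc[d] for d in single_acc}
```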
1907.00103
Matthew Streeter
Matthew Streeter
Learning Effective Loss Functions Efficiently
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of learning a loss function which, when minimized over a training dataset, yields a model that approximately minimizes a validation error metric. Though learning an optimal loss function is NP-hard, we present an anytime algorithm that is asymptotically optimal in the worst case, and is provably efficient in an idealized "easy" case. Experimentally, we show that this algorithm can be used to tune loss function hyperparameters orders of magnitude faster than state-of-the-art alternatives. We also show that our algorithm can be used to learn novel and effective loss functions on-the-fly during training.
[ { "created": "Fri, 28 Jun 2019 22:35:17 GMT", "version": "v1" } ]
2019-07-02
[ [ "Streeter", "Matthew", "" ] ]
We consider the problem of learning a loss function which, when minimized over a training dataset, yields a model that approximately minimizes a validation error metric. Though learning an optimal loss function is NP-hard, we present an anytime algorithm that is asymptotically optimal in the worst case, and is provably efficient in an idealized "easy" case. Experimentally, we show that this algorithm can be used to tune loss function hyperparameters orders of magnitude faster than state-of-the-art alternatives. We also show that our algorithm can be used to learn novel and effective loss functions on-the-fly during training.
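The anytime property — stopping early still returns the best setting found so far — can be sketched with a trivial sequential search. The paper's algorithm chooses candidates adaptively rather than from a fixed list, so this is only the outer shell of the idea:

```python
def tune_loss_weight(candidates, train_and_validate, budget):
    """Anytime search over loss-function hyperparameters: evaluate
    candidates in order, tracking the best validation error seen so far,
    so that interrupting the loop still yields a usable setting.
    `train_and_validate` maps a hyperparameter to a validation error."""
    best = None
    for w in candidates[:budget]:
        err = train_and_validate(w)
        if best is None or err < best[1]:
            best = (w, err)
    return best  # (hyperparameter, validation_error)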
2102.07974
Thiparat Chotibut
Jakub Bielawski, Thiparat Chotibut, Fryderyk Falniowski, Grzegorz Kosiorowski, Micha{\l} Misiurewicz, Georgios Piliouras
Follow-the-Regularized-Leader Routes to Chaos in Routing Games
30 pages, 8 figures
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:925-935, 2021
null
null
cs.GT cs.LG math.DS nlin.CD physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
We study the emergence of chaotic behavior of Follow-the-Regularized Leader (FoReL) dynamics in games. We focus on the effects of increasing the population size or the scale of costs in congestion games, and generalize recent results on unstable, chaotic behaviors in the Multiplicative Weights Update dynamics to a much larger class of FoReL dynamics. We establish that, even in simple linear non-atomic congestion games with two parallel links and any fixed learning rate, unless the game is fully symmetric, increasing the population size or the scale of costs causes learning dynamics to become unstable and eventually chaotic, in the sense of Li-Yorke and positive topological entropy. Furthermore, we show the existence of novel non-standard phenomena such as the coexistence of stable Nash equilibria and chaos in the same game. We also observe the simultaneous creation of a chaotic attractor as another chaotic attractor gets destroyed. Lastly, although FoReL dynamics can be strange and non-equilibrating, we prove that the time average still converges to an exact equilibrium for any choice of learning rate and any scale of costs.
[ { "created": "Tue, 16 Feb 2021 06:40:31 GMT", "version": "v1" }, { "created": "Wed, 17 Feb 2021 05:38:45 GMT", "version": "v2" } ]
2022-01-28
[ [ "Bielawski", "Jakub", "" ], [ "Chotibut", "Thiparat", "" ], [ "Falniowski", "Fryderyk", "" ], [ "Kosiorowski", "Grzegorz", "" ], [ "Misiurewicz", "Michał", "" ], [ "Piliouras", "Georgios", "" ] ]
We study the emergence of chaotic behavior of Follow-the-Regularized Leader (FoReL) dynamics in games. We focus on the effects of increasing the population size or the scale of costs in congestion games, and generalize recent results on unstable, chaotic behaviors in the Multiplicative Weights Update dynamics to a much larger class of FoReL dynamics. We establish that, even in simple linear non-atomic congestion games with two parallel links and any fixed learning rate, unless the game is fully symmetric, increasing the population size or the scale of costs causes learning dynamics to become unstable and eventually chaotic, in the sense of Li-Yorke and positive topological entropy. Furthermore, we show the existence of novel non-standard phenomena such as the coexistence of stable Nash equilibria and chaos in the same game. We also observe the simultaneous creation of a chaotic attractor as another chaotic attractor gets destroyed. Lastly, although FoReL dynamics can be strange and non-equilibrating, we prove that the time average still converges to an exact equilibrium for any choice of learning rate and any scale of costs.
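The destabilization phenomenon is easy to reproduce numerically for the Multiplicative Weights Update special case of FoReL on a two-link linear congestion game; the parameterization below is illustrative:

```python
import math

def mwu_two_links(x0, steps, eps, N=1.0, a=(1.0, 1.0), b=(0.0, 0.0)):
    """Multiplicative Weights Update on a nonatomic two-link congestion
    game with linear link costs c_i(load) = a_i * load + b_i. x is the
    fraction of the total flow N routed on link 1. Large eps * N is the
    regime in which the dynamics destabilize."""
    x = x0
    trace = [x]
    for _ in range(steps):
        c1 = a[0] * N * x + b[0]
        c2 = a[1] * N * (1 - x) + b[1]
        # Exponential reweighting by realized costs, then renormalize.
        w1 = x * math.exp(-eps * c1)
        w2 = (1 - x) * math.exp(-eps * c2)
        x = w1 / (w1 + w2)
        trace.append(x)
    return trace
```

With small `eps * N` the flow fraction converges to the symmetric 50/50 equilibrium; raising `eps` or `N` pushes the map's slope at the fixed point past $-1$, after which the orbit oscillates and eventually becomes chaotic, matching the paper's qualitative picture.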
2002.09699
Rongfei Zeng
Rongfei Zeng, Shixun Zhang, Jiaqi Wang and Xiaowen Chu
FMore: An Incentive Scheme of Multi-dimensional Auction for Federated Learning in MEC
null
null
10.1109/ICDCS47774.2020.00094
null
cs.LG cs.GT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning coupled with Mobile Edge Computing (MEC) is considered one of the most promising solutions to AI-driven service provision. Plenty of studies focus on federated learning from the performance and security aspects, but they neglect the incentive mechanism. In MEC, edge nodes are reluctant to participate in learning voluntarily, and they differ in the provision of multi-dimensional resources, both of which might deteriorate the performance of federated learning. Also, lightweight schemes appeal to edge nodes in MEC. These features require the incentive mechanism to be well designed for MEC. In this paper, we present an incentive mechanism, FMore, with a multi-dimensional procurement auction of K winners. Our proposal FMore is not only lightweight and incentive compatible, but also encourages more high-quality edge nodes with low cost to participate in learning and eventually improves the performance of federated learning. We also present theoretical results on the Nash equilibrium strategy for edge nodes and employ expected utility theory to provide guidance to the aggregator. Both extensive simulations and real-world experiments demonstrate that the proposed scheme can effectively reduce the training rounds and drastically improve the model accuracy for challenging AI tasks.
[ { "created": "Sat, 22 Feb 2020 13:43:36 GMT", "version": "v1" } ]
2021-06-29
[ [ "Zeng", "Rongfei", "" ], [ "Zhang", "Shixun", "" ], [ "Wang", "Jiaqi", "" ], [ "Chu", "Xiaowen", "" ] ]
Federated learning coupled with Mobile Edge Computing (MEC) is considered one of the most promising solutions for AI-driven service provision. Many studies focus on federated learning from the performance and security aspects, but they neglect the incentive mechanism. In MEC, edge nodes may be unwilling to voluntarily participate in learning, and they differ in the provision of multi-dimensional resources, both of which might deteriorate the performance of federated learning. Also, lightweight schemes appeal to edge nodes in MEC. These features require the incentive mechanism to be well designed for MEC. In this paper, we present an incentive mechanism, FMore, with a multi-dimensional procurement auction of K winners. FMore is not only lightweight and incentive compatible, but also encourages more high-quality edge nodes with low cost to participate in learning, eventually improving the performance of federated learning. We also present theoretical results on the Nash equilibrium strategy of edge nodes and employ expected utility theory to provide guidance to the aggregator. Both extensive simulations and real-world experiments demonstrate that the proposed scheme can effectively reduce the number of training rounds and drastically improve model accuracy for challenging AI tasks.
2405.07006
Yu-Ying Chuang
Yu-Ying Chuang and Melanie J. Bell and Yu-Hsiang Tseng and R. Harald Baayen
Word-specific tonal realizations in Mandarin
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The pitch contours of Mandarin two-character words are generally understood as being shaped by the underlying tones of the constituent single-character words, in interaction with articulatory constraints imposed by factors such as speech rate, co-articulation with adjacent tones, segmental make-up, and predictability. This study shows that tonal realization is also partially determined by words' meanings. We first show, on the basis of a Taiwan corpus of spontaneous conversations, using the generalized additive regression model, and focusing on the rise-fall tone pattern, that after controlling for effects of speaker and context, word type is a stronger predictor of pitch realization than all the previously established word-form related predictors combined. Importantly, the addition of information about meaning in context improves prediction accuracy even further. We then proceed to show, using computational modeling with context-specific word embeddings, that token-specific pitch contours predict word type with 50% accuracy on held-out data, and that context-sensitive, token-specific embeddings can predict the shape of pitch contours with 30% accuracy. These accuracies, which are an order of magnitude above chance level, suggest that the relation between words' pitch contours and their meanings is sufficiently strong to be functional for language users. The theoretical implications of these empirical findings are discussed.
[ { "created": "Sat, 11 May 2024 13:00:35 GMT", "version": "v1" } ]
2024-05-14
[ [ "Chuang", "Yu-Ying", "" ], [ "Bell", "Melanie J.", "" ], [ "Tseng", "Yu-Hsiang", "" ], [ "Baayen", "R. Harald", "" ] ]
The pitch contours of Mandarin two-character words are generally understood as being shaped by the underlying tones of the constituent single-character words, in interaction with articulatory constraints imposed by factors such as speech rate, co-articulation with adjacent tones, segmental make-up, and predictability. This study shows that tonal realization is also partially determined by words' meanings. We first show, on the basis of a Taiwan corpus of spontaneous conversations, using the generalized additive regression model, and focusing on the rise-fall tone pattern, that after controlling for effects of speaker and context, word type is a stronger predictor of pitch realization than all the previously established word-form related predictors combined. Importantly, the addition of information about meaning in context improves prediction accuracy even further. We then proceed to show, using computational modeling with context-specific word embeddings, that token-specific pitch contours predict word type with 50% accuracy on held-out data, and that context-sensitive, token-specific embeddings can predict the shape of pitch contours with 30% accuracy. These accuracies, which are an order of magnitude above chance level, suggest that the relation between words' pitch contours and their meanings is sufficiently strong to be functional for language users. The theoretical implications of these empirical findings are discussed.
cs/0602089
Pascal Vontobel
Roxana Smarandache and Pascal O. Vontobel
Pseudo-Codeword Analysis of Tanner Graphs from Projective and Euclidean Planes
Submitted to IEEE Transactions on Information Theory, February 25, 2006
null
null
null
cs.IT cs.DM math.IT
null
In order to understand the performance of a code under maximum-likelihood (ML) decoding, one studies the codewords, in particular the minimal codewords, and their Hamming weights. In the context of linear programming (LP) decoding, one's attention needs to be shifted to the pseudo-codewords, in particular to the minimal pseudo-codewords, and their pseudo-weights. In this paper we investigate some families of codes that have good properties under LP decoding, namely certain families of low-density parity-check (LDPC) codes that are derived from projective and Euclidean planes: we study the structure of their minimal pseudo-codewords and give lower bounds on their pseudo-weight.
[ { "created": "Sun, 26 Feb 2006 01:33:01 GMT", "version": "v1" } ]
2007-07-13
[ [ "Smarandache", "Roxana", "" ], [ "Vontobel", "Pascal O.", "" ] ]
In order to understand the performance of a code under maximum-likelihood (ML) decoding, one studies the codewords, in particular the minimal codewords, and their Hamming weights. In the context of linear programming (LP) decoding, one's attention needs to be shifted to the pseudo-codewords, in particular to the minimal pseudo-codewords, and their pseudo-weights. In this paper we investigate some families of codes that have good properties under LP decoding, namely certain families of low-density parity-check (LDPC) codes that are derived from projective and Euclidean planes: we study the structure of their minimal pseudo-codewords and give lower bounds on their pseudo-weight.
2301.08518
Haksoo Lim
Haksoo Lim, Minjung Kim, Sewon Park, Noseong Park
Regular Time-series Generation using SGM
9 pages, appendix 3 pages, under review
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Score-based generative models (SGMs) are generative models that are currently in the spotlight. Time-series data frequently occur in daily life, e.g., stock data, climate data, and so on. In particular, time-series forecasting and classification are popular research topics in the field of machine learning. SGMs are also known for outperforming other generative models. Motivated by this, we apply SGMs to synthesize time-series data by learning conditional score functions. We propose a conditional score network for the time-series generation domain. Furthermore, we derive the loss function connecting score matching and denoising score matching in the time-series generation domain. Finally, we achieve state-of-the-art results on real-world datasets in terms of sampling diversity and quality.
[ { "created": "Fri, 20 Jan 2023 11:34:12 GMT", "version": "v1" } ]
2023-01-23
[ [ "Lim", "Haksoo", "" ], [ "Kim", "Minjung", "" ], [ "Park", "Sewon", "" ], [ "Park", "Noseong", "" ] ]
Score-based generative models (SGMs) are generative models that are currently in the spotlight. Time-series data frequently occur in daily life, e.g., stock data, climate data, and so on. In particular, time-series forecasting and classification are popular research topics in the field of machine learning. SGMs are also known for outperforming other generative models. Motivated by this, we apply SGMs to synthesize time-series data by learning conditional score functions. We propose a conditional score network for the time-series generation domain. Furthermore, we derive the loss function connecting score matching and denoising score matching in the time-series generation domain. Finally, we achieve state-of-the-art results on real-world datasets in terms of sampling diversity and quality.
1011.3717
Romain Couillet
Romain Couillet, Jakob Hoydis, and Merouane Debbah
Random Beamforming over Quasi-Static and Fading Channels: A Deterministic Equivalent Approach
to appear in IEEE Transactions on Information Theory, 2012
null
10.1109/TIT.2012.2201913
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we study the performance of random isometric precoders over quasi-static and correlated fading channels. We derive deterministic approximations of the mutual information and the signal-to-interference-plus-noise ratio (SINR) at the output of the minimum-mean-square-error (MMSE) receiver and provide simple provably converging fixed-point algorithms for their computation. Although these approximations are only proven exact in the asymptotic regime with infinitely many antennas at the transmitters and receivers, simulations suggest that they closely match the performance of small-dimensional systems. We exemplarily apply our results to the performance analysis of multi-cellular communication systems, multiple-input multiple-output multiple-access channels (MIMO-MAC), and MIMO interference channels. The mathematical analysis is based on the Stieltjes transform method. This enables the derivation of deterministic equivalents of functionals of large-dimensional random matrices. In contrast to previous works, our analysis does not rely on arguments from free probability theory which enables the consideration of random matrix models for which asymptotic freeness does not hold. Thus, the results of this work are also a novel contribution to the field of random matrix theory and applicable to a wide spectrum of practical systems.
[ { "created": "Tue, 16 Nov 2010 14:59:24 GMT", "version": "v1" }, { "created": "Mon, 28 Nov 2011 18:37:23 GMT", "version": "v2" }, { "created": "Fri, 11 May 2012 09:22:25 GMT", "version": "v3" } ]
2016-11-18
[ [ "Couillet", "Romain", "" ], [ "Hoydis", "Jakob", "" ], [ "Debbah", "Merouane", "" ] ]
In this work, we study the performance of random isometric precoders over quasi-static and correlated fading channels. We derive deterministic approximations of the mutual information and the signal-to-interference-plus-noise ratio (SINR) at the output of the minimum-mean-square-error (MMSE) receiver and provide simple provably converging fixed-point algorithms for their computation. Although these approximations are only proven exact in the asymptotic regime with infinitely many antennas at the transmitters and receivers, simulations suggest that they closely match the performance of small-dimensional systems. We exemplarily apply our results to the performance analysis of multi-cellular communication systems, multiple-input multiple-output multiple-access channels (MIMO-MAC), and MIMO interference channels. The mathematical analysis is based on the Stieltjes transform method. This enables the derivation of deterministic equivalents of functionals of large-dimensional random matrices. In contrast to previous works, our analysis does not rely on arguments from free probability theory which enables the consideration of random matrix models for which asymptotic freeness does not hold. Thus, the results of this work are also a novel contribution to the field of random matrix theory and applicable to a wide spectrum of practical systems.
2109.00653
Billy Jin
Monika Henzinger, Billy Jin, Richard Peng, David P. Williamson
Cut-Toggling and Cycle-Toggling for Electrical Flow and Other p-Norm Flows
arXiv admin note: text overlap with arXiv:2010.16316
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of finding flows in undirected graphs so as to minimize the weighted $p$-norm of the flow for any $p > 1$. When $p=2$, the problem is that of finding an electrical flow, and its dual is equivalent to solving a Laplacian linear system. The case $p = \infty$ corresponds to finding a min-congestion flow, which is equivalent to max-flows. A typical algorithmic construction for such problems considers vertex potentials corresponding to the flow conservation constraints, and has two simple types of update steps: cycle toggling, which modifies the flow along a cycle, and cut toggling, which modifies all potentials on one side of a cut. Both types of steps are typically performed relative to a spanning tree $T$; then the cycle is a fundamental cycle of $T$, and the cut is a fundamental cut of $T$. In this paper, we show that these simple steps can be used to give a novel efficient implementation for the $p = 2$ case and to find near-optimal $p$-norm flows in a low number of iterations for all values of $p > 1$. Compared to known faster algorithms for these problems, our algorithms are simpler, more combinatorial, and also expose several underlying connections between these algorithms and dynamic graph data structures that have not been formalized previously.
[ { "created": "Thu, 2 Sep 2021 00:17:20 GMT", "version": "v1" } ]
2021-09-06
[ [ "Henzinger", "Monika", "" ], [ "Jin", "Billy", "" ], [ "Peng", "Richard", "" ], [ "Williamson", "David P.", "" ] ]
We study the problem of finding flows in undirected graphs so as to minimize the weighted $p$-norm of the flow for any $p > 1$. When $p=2$, the problem is that of finding an electrical flow, and its dual is equivalent to solving a Laplacian linear system. The case $p = \infty$ corresponds to finding a min-congestion flow, which is equivalent to max-flows. A typical algorithmic construction for such problems considers vertex potentials corresponding to the flow conservation constraints, and has two simple types of update steps: cycle toggling, which modifies the flow along a cycle, and cut toggling, which modifies all potentials on one side of a cut. Both types of steps are typically performed relative to a spanning tree $T$; then the cycle is a fundamental cycle of $T$, and the cut is a fundamental cut of $T$. In this paper, we show that these simple steps can be used to give a novel efficient implementation for the $p = 2$ case and to find near-optimal $p$-norm flows in a low number of iterations for all values of $p > 1$. Compared to known faster algorithms for these problems, our algorithms are simpler, more combinatorial, and also expose several underlying connections between these algorithms and dynamic graph data structures that have not been formalized previously.
2203.10256
Zhixian Yang
Zhixian Yang, Xiaojun Wan
Dependency-based Mixture Language Models
Accepted to ACL 2022 Main Conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various models have been proposed to incorporate knowledge of syntactic structures into neural language models. However, previous works have relied heavily on elaborate components for a specific language model, usually a recurrent neural network (RNN), which makes them unwieldy in practice to fit into other neural language models, such as Transformer and GPT-2. In this paper, we introduce Dependency-based Mixture Language Models. In detail, we first train neural language models with a novel dependency modeling objective to learn the probability distribution of future dependent tokens given context. We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention. Extensive experiments and human evaluations show that our method can be easily and effectively applied to different neural language models while improving neural text generation on various tasks.
[ { "created": "Sat, 19 Mar 2022 06:28:30 GMT", "version": "v1" } ]
2022-03-22
[ [ "Yang", "Zhixian", "" ], [ "Wan", "Xiaojun", "" ] ]
Various models have been proposed to incorporate knowledge of syntactic structures into neural language models. However, previous works have relied heavily on elaborate components for a specific language model, usually a recurrent neural network (RNN), which makes them unwieldy in practice to fit into other neural language models, such as Transformer and GPT-2. In this paper, we introduce Dependency-based Mixture Language Models. In detail, we first train neural language models with a novel dependency modeling objective to learn the probability distribution of future dependent tokens given context. We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention. Extensive experiments and human evaluations show that our method can be easily and effectively applied to different neural language models while improving neural text generation on various tasks.
1305.4163
Dmitry Namiot
Dmitry Namiot, Manfred Sneps-Sneppe
Local Messages for Smartphones
6 pages. Submitted to CFIC Coimbra 2013 The Conference on Future Internet Communications
null
null
null
cs.NI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a new model for local messaging based on network proximity. We present a novel mobile mashup which combines Wi-Fi proximity measurements with Cloud Messaging. Our mobile mashup combines passive monitoring of smartphones with cloud-based messaging for mobile operating systems. Passive monitoring can determine the location of mobile subscribers (mobile phones, actually) without the active participation of mobile users. This paper describes how to combine passive monitoring and notifications.
[ { "created": "Fri, 17 May 2013 19:23:47 GMT", "version": "v1" } ]
2013-05-20
[ [ "Namiot", "Dmitry", "" ], [ "Sneps-Sneppe", "Manfred", "" ] ]
This paper describes a new model for local messaging based on network proximity. We present a novel mobile mashup which combines Wi-Fi proximity measurements with Cloud Messaging. Our mobile mashup combines passive monitoring of smartphones with cloud-based messaging for mobile operating systems. Passive monitoring can determine the location of mobile subscribers (mobile phones, actually) without the active participation of mobile users. This paper describes how to combine passive monitoring and notifications.
2312.08378
Houcheng Su
Houcheng Su, Daixian Liu, Mengzhu Wang, Wei Wang
Singular Value Penalization and Semantic Data Augmentation for Fully Test-Time Adaptation
10 pages, 5 figures, AAAI 2024 (score: 5422)
null
null
null
cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
Fully test-time adaptation (FTTA) adapts a model that is trained on a source domain to a target domain during the testing phase, where the two domains follow different distributions and source data is unavailable during the training phase. Existing methods usually adopt entropy minimization to reduce the uncertainty of target prediction results, and improve the FTTA performance accordingly. However, they fail to ensure the diversity in target prediction results. Recent domain adaptation studies have shown that maximizing the sum of singular values of prediction results can simultaneously enhance their confidence (discriminability) and diversity. However, during the training phase, larger singular values usually take up a dominant position in loss maximization. This results in the model being more inclined to enhance discriminability for easily distinguishable classes, and the improvement in diversity is insufficiently effective. Furthermore, the adaptation and prediction in FTTA only use data from the current batch, which may lead to the risk of overfitting. To address the aforementioned issues, we propose maximizing the sum of singular values while minimizing their variance. This shifts the model's focus toward the smaller singular values, enhancing discriminability between more challenging classes and effectively increasing the diversity of prediction results. Moreover, we incorporate data from the previous batch to realize semantic data augmentation for the current batch, reducing the risk of overfitting. Extensive experiments on benchmark datasets show our proposed approach outperforms compared state-of-the-art FTTA methods.
[ { "created": "Sun, 10 Dec 2023 01:08:56 GMT", "version": "v1" } ]
2023-12-15
[ [ "Su", "Houcheng", "" ], [ "Liu", "Daixian", "" ], [ "Wang", "Mengzhu", "" ], [ "Wang", "Wei", "" ] ]
Fully test-time adaptation (FTTA) adapts a model that is trained on a source domain to a target domain during the testing phase, where the two domains follow different distributions and source data is unavailable during the training phase. Existing methods usually adopt entropy minimization to reduce the uncertainty of target prediction results, and improve the FTTA performance accordingly. However, they fail to ensure the diversity in target prediction results. Recent domain adaptation studies have shown that maximizing the sum of singular values of prediction results can simultaneously enhance their confidence (discriminability) and diversity. However, during the training phase, larger singular values usually take up a dominant position in loss maximization. This results in the model being more inclined to enhance discriminability for easily distinguishable classes, and the improvement in diversity is insufficiently effective. Furthermore, the adaptation and prediction in FTTA only use data from the current batch, which may lead to the risk of overfitting. To address the aforementioned issues, we propose maximizing the sum of singular values while minimizing their variance. This shifts the model's focus toward the smaller singular values, enhancing discriminability between more challenging classes and effectively increasing the diversity of prediction results. Moreover, we incorporate data from the previous batch to realize semantic data augmentation for the current batch, reducing the risk of overfitting. Extensive experiments on benchmark datasets show our proposed approach outperforms compared state-of-the-art FTTA methods.
2003.12590
Martin Grohe
Martin Grohe
word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings of Structured Data
null
null
null
null
cs.LG cs.DB cs.DM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vector representations of graphs and relational structures, whether hand-crafted feature vectors or learned representations, enable us to apply standard data analysis and machine learning techniques to the structures. A wide range of methods for generating such embeddings have been studied in the machine learning and knowledge representation literature. However, vector embeddings have received relatively little attention from a theoretical point of view. Starting with a survey of embedding techniques that have been used in practice, in this paper we propose two theoretical approaches that we see as central for understanding the foundations of vector embeddings. We draw connections between the various approaches and suggest directions for future research.
[ { "created": "Fri, 27 Mar 2020 18:23:55 GMT", "version": "v1" } ]
2020-03-31
[ [ "Grohe", "Martin", "" ] ]
Vector representations of graphs and relational structures, whether hand-crafted feature vectors or learned representations, enable us to apply standard data analysis and machine learning techniques to the structures. A wide range of methods for generating such embeddings have been studied in the machine learning and knowledge representation literature. However, vector embeddings have received relatively little attention from a theoretical point of view. Starting with a survey of embedding techniques that have been used in practice, in this paper we propose two theoretical approaches that we see as central for understanding the foundations of vector embeddings. We draw connections between the various approaches and suggest directions for future research.
2407.11998
Pei Chen
Pei Chen, Heng Wang, Sainan Sun, Zhiyuan Chen, Zhenkun Liu, Shuhua Cao, Li Yang, Minghui Yang
Custom Cloth Creation and Virtual Try-on for Everyone
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This demo showcases a simple tool that utilizes AIGC technology, enabling both professional designers and regular users to easily customize clothing for their digital avatars. Customization options include changing clothing colors, textures, logos, and patterns. Compared with traditional 3D modeling processes, our approach significantly enhances efficiency and interactivity and reduces production costs.
[ { "created": "Fri, 14 Jun 2024 03:19:03 GMT", "version": "v1" } ]
2024-07-18
[ [ "Chen", "Pei", "" ], [ "Wang", "Heng", "" ], [ "Sun", "Sainan", "" ], [ "Chen", "Zhiyuan", "" ], [ "Liu", "Zhenkun", "" ], [ "Cao", "Shuhua", "" ], [ "Yang", "Li", "" ], [ "Yang", "Minghui", "" ] ]
This demo showcases a simple tool that utilizes AIGC technology, enabling both professional designers and regular users to easily customize clothing for their digital avatars. Customization options include changing clothing colors, textures, logos, and patterns. Compared with traditional 3D modeling processes, our approach significantly enhances efficiency and interactivity and reduces production costs.
2112.03003
Shakshi Sharma
Sabur Butt, Shakshi Sharma, Rajesh Sharma, Grigori Sidorov, Alexander Gelbukh
What goes on inside rumour and non-rumour tweets and their reactions: A Psycholinguistic Analyses
10 pages
null
null
null
cs.CL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, the problem of rumours on online social media (OSM) has attracted a lot of attention. Researchers have investigated it from two main directions. The first is descriptive analysis of rumours, and the second is proposing techniques to detect (or classify) rumours. In the descriptive line of work, where researchers have tried to analyse rumours using NLP approaches, there isn't much emphasis on psycholinguistic analyses of social media text. These kinds of analyses on rumour case studies are vital for drawing meaningful conclusions to mitigate misinformation. For our analysis, we explored the PHEME9 rumour dataset (consisting of 9 events), including source tweets (both rumour and non-rumour categories) and response tweets. We compared the rumour and non-rumour source tweets and then their corresponding reply (response) tweets to understand how they differ linguistically for every incident. Furthermore, we also evaluated whether these features can be used for classifying rumour vs. non-rumour tweets through machine learning models. To this end, we employed various classical and ensemble-based approaches. To filter out the highly discriminative psycholinguistic features, we explored the SHAP AI explainability tool. To summarise, this research contributes by performing an in-depth psycholinguistic analysis of rumours related to various kinds of events.
[ { "created": "Tue, 9 Nov 2021 07:45:11 GMT", "version": "v1" } ]
2021-12-07
[ [ "Butt", "Sabur", "" ], [ "Sharma", "Shakshi", "" ], [ "Sharma", "Rajesh", "" ], [ "Sidorov", "Grigori", "" ], [ "Gelbukh", "Alexander", "" ] ]
In recent years, the problem of rumours on online social media (OSM) has attracted a lot of attention. Researchers have investigated it from two main directions. The first is descriptive analysis of rumours, and the second is proposing techniques to detect (or classify) rumours. In the descriptive line of work, where researchers have tried to analyse rumours using NLP approaches, there isn't much emphasis on psycholinguistic analyses of social media text. These kinds of analyses on rumour case studies are vital for drawing meaningful conclusions to mitigate misinformation. For our analysis, we explored the PHEME9 rumour dataset (consisting of 9 events), including source tweets (both rumour and non-rumour categories) and response tweets. We compared the rumour and non-rumour source tweets and then their corresponding reply (response) tweets to understand how they differ linguistically for every incident. Furthermore, we also evaluated whether these features can be used for classifying rumour vs. non-rumour tweets through machine learning models. To this end, we employed various classical and ensemble-based approaches. To filter out the highly discriminative psycholinguistic features, we explored the SHAP AI explainability tool. To summarise, this research contributes by performing an in-depth psycholinguistic analysis of rumours related to various kinds of events.
2108.01374
Hsiao-Tzu Hung
Hsiao-Tzu Hung and Joann Ching and Seungheon Doh and Nabin Kim and Juhan Nam and Yi-Hsuan Yang
EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation
The paper has been accepted for publication at ISMIR 2021
null
null
null
cs.SD cs.MM eess.AS
http://creativecommons.org/licenses/by/4.0/
While there are many music datasets with emotion labels in the literature, they cannot be used for research on symbolic-domain music analysis or generation, as there are usually audio files only. In this paper, we present the EMOPIA (pronounced `yee-m\`{o}-pi-uh') dataset, a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, to facilitate research on various tasks related to music emotion. The dataset contains 1,087 music clips from 387 songs and clip-level emotion labels annotated by four dedicated annotators. Since the clips are not restricted to one clip per song, they can also be used for song-level analysis. We present the methodology for building the dataset, covering the song list curation, clip selection, and emotion annotation processes. Moreover, we prototype use cases on clip-level music emotion classification and emotion-based symbolic music generation by training and evaluating corresponding models using the dataset. The result demonstrates the potential of EMOPIA for being used in future exploration on piano emotion-related MIR tasks.
[ { "created": "Tue, 3 Aug 2021 08:59:26 GMT", "version": "v1" } ]
2021-08-04
[ [ "Hung", "Hsiao-Tzu", "" ], [ "Ching", "Joann", "" ], [ "Doh", "Seungheon", "" ], [ "Kim", "Nabin", "" ], [ "Nam", "Juhan", "" ], [ "Yang", "Yi-Hsuan", "" ] ]
While there are many music datasets with emotion labels in the literature, they cannot be used for research on symbolic-domain music analysis or generation, as there are usually audio files only. In this paper, we present the EMOPIA (pronounced `yee-m\`{o}-pi-uh') dataset, a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, to facilitate research on various tasks related to music emotion. The dataset contains 1,087 music clips from 387 songs and clip-level emotion labels annotated by four dedicated annotators. Since the clips are not restricted to one clip per song, they can also be used for song-level analysis. We present the methodology for building the dataset, covering the song list curation, clip selection, and emotion annotation processes. Moreover, we prototype use cases on clip-level music emotion classification and emotion-based symbolic music generation by training and evaluating corresponding models using the dataset. The result demonstrates the potential of EMOPIA for being used in future exploration on piano emotion-related MIR tasks.
1608.03990
Anand Pratap Singh
Anand Pratap Singh and Shivaji Medida and Karthik Duraisamy
Machine Learning-augmented Predictive Modeling of Turbulent Separated Flows over Airfoils
null
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A modeling paradigm is developed to augment predictive models of turbulence by effectively utilizing limited data generated from physical experiments. The key components of our approach involve inverse modeling to infer the spatial distribution of model discrepancies, and machine learning to reconstruct discrepancy information from a large number of inverse problems into corrective model forms. We apply the methodology to turbulent flows over airfoils involving flow separation. Model augmentations are developed for the Spalart Allmaras (SA) model using adjoint-based full field inference on experimentally measured lift coefficient data. When these model forms are reconstructed using neural networks (NN) and embedded within a standard solver, we show that much improved predictions in lift can be obtained for geometries and flow conditions that were not used to train the model. The NN-augmented SA model also predicts surface pressures extremely well. Portability of this approach is demonstrated by confirming that predictive improvements are preserved when the augmentation is embedded in a different commercial finite-element solver. The broader vision is that by incorporating data that can reveal the form of the innate model discrepancy, the applicability of data-driven turbulence models can be extended to more general flows.
[ { "created": "Sat, 13 Aug 2016 15:07:50 GMT", "version": "v1" }, { "created": "Tue, 16 Aug 2016 13:56:32 GMT", "version": "v2" }, { "created": "Sun, 6 Nov 2016 23:42:22 GMT", "version": "v3" } ]
2016-11-08
[ [ "Singh", "Anand Pratap", "" ], [ "Medida", "Shivaji", "" ], [ "Duraisamy", "Karthik", "" ] ]
A modeling paradigm is developed to augment predictive models of turbulence by effectively utilizing limited data generated from physical experiments. The key components of our approach involve inverse modeling to infer the spatial distribution of model discrepancies, and machine learning to reconstruct discrepancy information from a large number of inverse problems into corrective model forms. We apply the methodology to turbulent flows over airfoils involving flow separation. Model augmentations are developed for the Spalart Allmaras (SA) model using adjoint-based full field inference on experimentally measured lift coefficient data. When these model forms are reconstructed using neural networks (NN) and embedded within a standard solver, we show that much improved predictions in lift can be obtained for geometries and flow conditions that were not used to train the model. The NN-augmented SA model also predicts surface pressures extremely well. Portability of this approach is demonstrated by confirming that predictive improvements are preserved when the augmentation is embedded in a different commercial finite-element solver. The broader vision is that by incorporating data that can reveal the form of the innate model discrepancy, the applicability of data-driven turbulence models can be extended to more general flows.
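The augmentation loop described above — infer a spatial discrepancy field by inversion, then learn it from local flow features so it generalizes to unseen cases — can be sketched compactly. A linear least-squares fit stands in for the paper's neural network, and the feature vectors and discrepancy field beta below are synthetic assumptions, not data from the paper.

```python
import numpy as np

# Pretend the inverse problem has already produced a correction field
# beta(x) at 200 mesh points; ML then regresses beta onto local flow
# features (e.g. strain/vorticity ratios -- names are illustrative).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 3))            # synthetic local features
beta_true = 1.0 + 0.5 * features[:, 0]          # synthetic inferred field

X = np.hstack([features, np.ones((200, 1))])    # append a bias column
w, *_ = np.linalg.lstsq(X, beta_true, rcond=None)

def beta_model(f):
    """Predict the multiplicative correction beta for feature rows f (n, 3)."""
    return np.hstack([f, np.ones((len(f), 1))]) @ w

# Because beta_true is exactly linear in the features, the fit recovers it.
print(bool(np.allclose(beta_model(features), beta_true)))  # True
```

In the paper the learned model is embedded back into the turbulence-model source terms of a flow solver; here `beta_model` simply returns the predicted field.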
1004.4944
Stefano Rini
Stefano Rini, Daniela Tuninetti, and Natasha Devroye
Outer Bounds for the Interference Channel with a Cognitive Relay
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we first present an outer bound for a general interference channel with a cognitive relay, i.e., a relay that has non-causal knowledge of both independent messages transmitted in the interference channel. This outer bound reduces to the capacity region of the deterministic broadcast channel and of the deterministic cognitive interference channel through nulling of certain channel inputs. It does not, however, reduce to that of certain deterministic interference channels for which capacity is known. As such, we subsequently tighten the bound for channels whose outputs satisfy an "invertibility" condition. This second outer bound now reduces to the capacity of this special class of deterministic interference channels. The second outer bound is further tightened for the high SNR deterministic approximation of the Gaussian interference channel with a cognitive relay by exploiting the special structure of the interference. We provide an example that suggests that this third bound is tight in at least some parameter regimes for the high SNR deterministic approximation of the Gaussian channel. Another example shows that the third bound is capacity in the special case where there are no direct links between the non-cognitive transmitters.
[ { "created": "Wed, 28 Apr 2010 02:53:34 GMT", "version": "v1" }, { "created": "Tue, 18 May 2010 19:35:04 GMT", "version": "v2" } ]
2015-03-17
[ [ "Rini", "Stefano", "" ], [ "Tuninetti", "Daniela", "" ], [ "Devroye", "Natasha", "" ] ]
In this paper, we first present an outer bound for a general interference channel with a cognitive relay, i.e., a relay that has non-causal knowledge of both independent messages transmitted in the interference channel. This outer bound reduces to the capacity region of the deterministic broadcast channel and of the deterministic cognitive interference channel through nulling of certain channel inputs. It does not, however, reduce to that of certain deterministic interference channels for which capacity is known. As such, we subsequently tighten the bound for channels whose outputs satisfy an "invertibility" condition. This second outer bound now reduces to the capacity of this special class of deterministic interference channels. The second outer bound is further tightened for the high SNR deterministic approximation of the Gaussian interference channel with a cognitive relay by exploiting the special structure of the interference. We provide an example that suggests that this third bound is tight in at least some parameter regimes for the high SNR deterministic approximation of the Gaussian channel. Another example shows that the third bound is capacity in the special case where there are no direct links between the non-cognitive transmitters.
1807.02420
Yuexiang Li
Yuexiang Li, Xinpeng Xie, Linlin Shen and Shaoxiong Liu
Reversed Active Learning based Atrous DenseNet for Pathological Image Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Having witnessed the development of deep learning in recent years, an increasing number of researchers have tried to adopt deep learning models for medical image analysis. However, the use of deep learning networks for pathological image analysis encounters several challenges, e.g. the high resolution (gigapixel) of pathological images and the lack of annotations of cancer areas. To address these challenges, we propose a complete framework for pathological image classification, which consists of a novel training strategy, namely reversed active learning (RAL), and an advanced network, namely atrous DenseNet (ADN). The proposed RAL can remove the mislabeled patches in the training set. The refined training set can then be used to train widely used deep learning networks, e.g. VGG-16, ResNets, etc. A novel deep learning network, i.e. atrous DenseNet (ADN), is also proposed for the classification of pathological images. The proposed ADN achieves multi-scale feature extraction by integrating atrous convolutions into the Dense Block. The proposed RAL and ADN have been evaluated on two pathological datasets, i.e. BACH and CCG. The experimental results demonstrate the excellent performance of the proposed ADN + RAL framework: average patch-level ACAs of 94.10% and 92.05% were achieved on the BACH and CCG validation sets, respectively.
[ { "created": "Fri, 6 Jul 2018 13:57:48 GMT", "version": "v1" } ]
2018-07-09
[ [ "Li", "Yuexiang", "" ], [ "Xie", "Xinpeng", "" ], [ "Shen", "Linlin", "" ], [ "Liu", "Shaoxiong", "" ] ]
Having witnessed the development of deep learning in recent years, an increasing number of researchers have tried to adopt deep learning models for medical image analysis. However, the use of deep learning networks for pathological image analysis encounters several challenges, e.g. the high resolution (gigapixel) of pathological images and the lack of annotations of cancer areas. To address these challenges, we propose a complete framework for pathological image classification, which consists of a novel training strategy, namely reversed active learning (RAL), and an advanced network, namely atrous DenseNet (ADN). The proposed RAL can remove the mislabeled patches in the training set. The refined training set can then be used to train widely used deep learning networks, e.g. VGG-16, ResNets, etc. A novel deep learning network, i.e. atrous DenseNet (ADN), is also proposed for the classification of pathological images. The proposed ADN achieves multi-scale feature extraction by integrating atrous convolutions into the Dense Block. The proposed RAL and ADN have been evaluated on two pathological datasets, i.e. BACH and CCG. The experimental results demonstrate the excellent performance of the proposed ADN + RAL framework: average patch-level ACAs of 94.10% and 92.05% were achieved on the BACH and CCG validation sets, respectively.
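The "reversed" idea — instead of querying the most uncertain samples as classic active learning does, discard the samples the current model finds hardest, on the assumption that they are label noise — can be sketched as a loss-ranking filter. Selecting by per-patch loss and the `drop_ratio` parameter are our assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def reversed_active_learning(losses, drop_ratio=0.05):
    """Return indices of training patches to KEEP: the highest-loss
    patches are dropped as likely mislabeled (illustrative sketch)."""
    losses = np.asarray(losses)
    n_drop = int(len(losses) * drop_ratio)
    if n_drop == 0:
        return np.arange(len(losses))
    ranked = np.argsort(losses)      # ascending: easy patches first
    return ranked[:-n_drop]          # drop the n_drop hardest patches

# Patch 1 has an outlier loss and is removed; the rest are kept.
keep = reversed_active_learning([0.1, 2.3, 0.2, 0.15], drop_ratio=0.25)
print(sorted(keep.tolist()))  # [0, 2, 3]
```

In practice this filter would run inside the training loop, with losses recomputed after each refinement round before retraining on the kept set.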
2005.09057
Kevin Moran P
Carlos Bernal-C\'ardenas, Nathan Cooper, Kevin Moran, Oscar Chaparro, Andrian Marcus and Denys Poshyvanyk
Translating Video Recordings of Mobile App Usages into Replayable Scenarios
In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages
null
10.1145/3377811.3380328
null
cs.SE cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing $\approx$ 89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers.
[ { "created": "Mon, 18 May 2020 20:11:36 GMT", "version": "v1" } ]
2020-05-20
[ [ "Bernal-Cárdenas", "Carlos", "" ], [ "Cooper", "Nathan", "" ], [ "Moran", "Kevin", "" ], [ "Chaparro", "Oscar", "" ], [ "Marcus", "Andrian", "" ], [ "Poshyvanyk", "Denys", "" ] ]
Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing $\approx$ 89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers.
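The last stage of the pipeline above — turning detected GUI actions into a replayable scenario — can be illustrated with a tiny translator from action tuples to Android `adb shell input` commands. The `(kind, x, y)` tuples and the choice of adb as the replay mechanism are assumptions for this sketch, not the paper's exact output format.

```python
def to_replay_script(actions):
    """Translate detected GUI actions into adb input commands.

    actions: iterable of (kind, x, y) tuples, kind in {"tap", "long_tap"}.
    """
    cmds = []
    for kind, x, y in actions:
        if kind == "tap":
            cmds.append(f"adb shell input tap {x} {y}")
        elif kind == "long_tap":
            # A long press is commonly encoded as a zero-length swipe
            # with a duration (here 800 ms).
            cmds.append(f"adb shell input swipe {x} {y} {x} {y} 800")
    return cmds

script = to_replay_script([("tap", 120, 640), ("long_tap", 300, 900)])
print(script[0])  # adb shell input tap 120 640
```

A real replayer would also need timing between actions and swipe trajectories, which V2S derives from the video's frame sequence.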
1706.07363
Mary Ann Weitnauer
Mary Ann Weitnauer, Jennifer Rexford, Nicholas Laneman, Matthieu Bloch, Santiago Griljava, Catherine Ross, and Gee-Kung Chang
Smart Wireless Communication is the Cornerstone of Smart Infrastructures
A Computing Community Consortium (CCC) white paper, 5 pages
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging smart infrastructures, such as Smart City, Smart Grid, Smart Health, and Smart Transportation, need smart wireless connectivity. However, the requirements of these smart infrastructures cannot be met with today's wireless networks. A new wireless infrastructure is needed to meet unprecedented needs in terms of agility, reliability, security, scalability, and partnerships. We are at the beginning of a revolution in how we live with technology, resulting from a convergence of machine learning (ML), the Internet-of-Things (IoT), and robotics. A smart infrastructure monitors and processes a vast amount of data, collected from a dense and wide distribution of heterogeneous sensors (e.g., the IoT), as well as from web applications like social media. In real time, using machine learning, patterns and relationships in the data over space, time, and application can be detected and predictions can be made; on the basis of these, resources can be managed, decisions can be made, and devices can be actuated to optimize metrics, such as cost, health, safety, and convenience.
[ { "created": "Thu, 22 Jun 2017 15:19:16 GMT", "version": "v1" } ]
2017-06-23
[ [ "Weitnauer", "Mary Ann", "" ], [ "Rexford", "Jennifer", "" ], [ "Laneman", "Nicholas", "" ], [ "Bloch", "Matthieu", "" ], [ "Griljava", "Santiago", "" ], [ "Ross", "Catherine", "" ], [ "Chang", "Gee-Kung", "" ] ]
Emerging smart infrastructures, such as Smart City, Smart Grid, Smart Health, and Smart Transportation, need smart wireless connectivity. However, the requirements of these smart infrastructures cannot be met with today's wireless networks. A new wireless infrastructure is needed to meet unprecedented needs in terms of agility, reliability, security, scalability, and partnerships. We are at the beginning of a revolution in how we live with technology, resulting from a convergence of machine learning (ML), the Internet-of-Things (IoT), and robotics. A smart infrastructure monitors and processes a vast amount of data, collected from a dense and wide distribution of heterogeneous sensors (e.g., the IoT), as well as from web applications like social media. In real time, using machine learning, patterns and relationships in the data over space, time, and application can be detected and predictions can be made; on the basis of these, resources can be managed, decisions can be made, and devices can be actuated to optimize metrics, such as cost, health, safety, and convenience.
2209.00296
Jianchuan Ding
Jianchuan Ding, Lingping Gao, Wenxi Liu, Haiyin Piao, Jia Pan, Zhenjun Du, Xin Yang, Baocai Yin
Monocular Camera-based Complex Obstacle Avoidance via Efficient Deep Reinforcement Learning
arXiv admin note: substantial text overlap with arXiv:2108.06887
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep reinforcement learning has achieved great success in laser-based collision avoidance because the laser senses accurate depth information without much redundant data, which helps the algorithm remain robust when it is migrated from the simulation environment to the real world. However, high-cost laser devices are not only difficult to deploy across large fleets of robots but also demonstrate unsatisfactory robustness towards complex obstacles, including irregular obstacles (e.g., tables, chairs, and shelves) as well as complex ground and special materials. In this paper, we propose a novel monocular camera-based complex obstacle avoidance framework. In particular, we transform the captured RGB images into pseudo-laser measurements for efficient deep reinforcement learning. Compared to a traditional laser measurement captured at a certain height, which only contains one-dimensional distance information to the neighboring obstacles, our proposed pseudo-laser measurement fuses the depth and semantic information of the captured RGB image, which makes our method effective for complex obstacles. We also design a feature extraction guidance module to weight the input pseudo-laser measurement, so that the agent pays more reasonable attention to the current state, which improves the accuracy and efficiency of the obstacle avoidance policy.
[ { "created": "Thu, 1 Sep 2022 08:58:40 GMT", "version": "v1" } ]
2022-09-02
[ [ "Ding", "Jianchuan", "" ], [ "Gao", "Lingping", "" ], [ "Liu", "Wenxi", "" ], [ "Piao", "Haiyin", "" ], [ "Pan", "Jia", "" ], [ "Du", "Zhenjun", "" ], [ "Yang", "Xin", "" ], [ "Yin", "Baocai", "" ] ]
Deep reinforcement learning has achieved great success in laser-based collision avoidance because the laser senses accurate depth information without much redundant data, which helps the algorithm remain robust when it is migrated from the simulation environment to the real world. However, high-cost laser devices are not only difficult to deploy across large fleets of robots but also demonstrate unsatisfactory robustness towards complex obstacles, including irregular obstacles (e.g., tables, chairs, and shelves) as well as complex ground and special materials. In this paper, we propose a novel monocular camera-based complex obstacle avoidance framework. In particular, we transform the captured RGB images into pseudo-laser measurements for efficient deep reinforcement learning. Compared to a traditional laser measurement captured at a certain height, which only contains one-dimensional distance information to the neighboring obstacles, our proposed pseudo-laser measurement fuses the depth and semantic information of the captured RGB image, which makes our method effective for complex obstacles. We also design a feature extraction guidance module to weight the input pseudo-laser measurement, so that the agent pays more reasonable attention to the current state, which improves the accuracy and efficiency of the obstacle avoidance policy.
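The fusion of depth and semantics into a 1-D pseudo-laser reading can be sketched as a per-column reduction: for every image column, keep the distance to the nearest pixel that the semantic stage labels as an obstacle. This collapse rule, the array shapes, and the `max_range` fallback are assumptions for illustration; the paper's exact fusion differs.

```python
import numpy as np

def pseudo_laser(depth, obstacle_mask, max_range=10.0):
    """Collapse a depth map into a 1-D pseudo-laser scan.

    depth:          (H, W) per-pixel depth estimate from the RGB image
    obstacle_mask:  (H, W) boolean semantic mask, True where obstacle
    Columns with no obstacle pixels read as max_range.
    """
    d = np.where(obstacle_mask, depth, np.inf)   # ignore free-space pixels
    scan = d.min(axis=0)                         # nearest obstacle per column
    return np.minimum(scan, max_range)

depth = np.array([[2.0, 5.0, 9.0],
                  [3.0, 4.0, 9.0]])
mask = np.array([[1, 0, 0],
                 [1, 1, 0]], dtype=bool)
print(pseudo_laser(depth, mask).tolist())  # [2.0, 4.0, 10.0]
```

Unlike a real laser scan at a fixed height, this vector reacts to low or irregular obstacles anywhere in the column, which is the property the paper exploits.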
1504.03182
Amaro Barreal
Amaro Barreal, Joonas P\"a\"akk\"onen, David Karpuk, Camilla Hollanti, Olav Tirkkonen
A Low-Complexity Message Recovery Method for Compute-and-Forward Relaying
5 figures, 5 pages, submitted
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Compute-and-Forward relaying strategy achieves high computation rates by decoding linear combinations of transmitted messages at intermediate relays. However, if the involved relays independently choose which combinations of the messages to decode, there is no guarantee that the overall system of linear equations is solvable at the destination. In this article it is shown that, for a Gaussian fading channel model with two transmitters and two relays, always choosing the combination that maximizes the computation rate often leads to a case where the original messages cannot be recovered. It is further shown that by limiting the relays to select from carefully designed sets of equations, a solvable system can be guaranteed while maintaining high computation rates. The proposed method has a constant computational complexity and requires no information exchange between the relays.
[ { "created": "Mon, 13 Apr 2015 13:48:59 GMT", "version": "v1" }, { "created": "Mon, 20 Apr 2015 06:29:41 GMT", "version": "v2" } ]
2015-04-21
[ [ "Barreal", "Amaro", "" ], [ "Pääkkönen", "Joonas", "" ], [ "Karpuk", "David", "" ], [ "Hollanti", "Camilla", "" ], [ "Tirkkonen", "Olav", "" ] ]
The Compute-and-Forward relaying strategy achieves high computation rates by decoding linear combinations of transmitted messages at intermediate relays. However, if the involved relays independently choose which combinations of the messages to decode, there is no guarantee that the overall system of linear equations is solvable at the destination. In this article it is shown that, for a Gaussian fading channel model with two transmitters and two relays, always choosing the combination that maximizes the computation rate often leads to a case where the original messages cannot be recovered. It is further shown that by limiting the relays to select from carefully designed sets of equations, a solvable system can be guaranteed while maintaining high computation rates. The proposed method has a constant computational complexity and requires no information exchange between the relays.
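The solvability issue the abstract describes can be made concrete with a toy check: the destination can recover both original messages only if the relays' integer coefficient vectors form an invertible system. The nonzero-determinant criterion below is a deliberate simplification (lattice-code recovery imposes stricter ring-theoretic conditions), used here only to show how greedy, uncoordinated relay choices can collide.

```python
import numpy as np

def is_solvable(coeff_vectors):
    """True if the stacked integer coefficient vectors are invertible,
    i.e. the destination can solve for both messages.
    (Simplified criterion: nonzero determinant of the integer matrix.)"""
    A = np.array(coeff_vectors, dtype=np.int64)
    return int(round(np.linalg.det(A))) != 0

# Both relays greedily pick the same rate-maximizing combination:
print(is_solvable([[1, 1], [1, 1]]))  # False -- messages unrecoverable
# Relays restricted to complementary equation sets:
print(is_solvable([[1, 1], [1, 2]]))  # True  -- system is solvable
```

The paper's contribution is precisely a low-complexity restriction of each relay's candidate set so that the second situation is guaranteed without inter-relay communication.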
2407.11837
Hongwei Li
Hongwei Li, Zoran Radivojevic, and Michael S. Eggleston
The Patchkeeper: An Integrated Wearable Electronic Stethoscope with Multiple Sensors
Submitted for IEEE Sensors Conference 2024
null
null
null
cs.HC eess.SP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Many parts of the human body generate internal sounds during biological processes, which are rich sources of information for understanding health and wellbeing. Despite the long history of development and use of stethoscopes, there is still a lack of proper tools for recording internal body sounds together with complementary sensors for long-term monitoring. In this paper, we present our development of a wearable electronic stethoscope, coined Patchkeeper (PK), that can be used for internal body sound recording over long periods of time. Patchkeeper also integrates several state-of-the-art biological sensors, including electrocardiogram (ECG), photoplethysmography (PPG), and inertial measurement unit (IMU) sensors. As a wearable device, Patchkeeper can be placed on various parts of the body to collect sound from particular organs, including the heart, lungs, stomach, and joints. We show in this paper that several vital signals can be recorded simultaneously with high quality. As Patchkeeper can be operated directly by the user, i.e. without involving health care professionals, we believe it could be a useful tool for telemedicine and remote diagnostics.
[ { "created": "Tue, 16 Jul 2024 15:22:10 GMT", "version": "v1" } ]
2024-07-17
[ [ "Li", "Hongwei", "" ], [ "Radivojevic", "Zoran", "" ], [ "Eggleston", "Michael S.", "" ] ]
Many parts of the human body generate internal sounds during biological processes, which are rich sources of information for understanding health and wellbeing. Despite the long history of development and use of stethoscopes, there is still a lack of proper tools for recording internal body sounds together with complementary sensors for long-term monitoring. In this paper, we present our development of a wearable electronic stethoscope, coined Patchkeeper (PK), that can be used for internal body sound recording over long periods of time. Patchkeeper also integrates several state-of-the-art biological sensors, including electrocardiogram (ECG), photoplethysmography (PPG), and inertial measurement unit (IMU) sensors. As a wearable device, Patchkeeper can be placed on various parts of the body to collect sound from particular organs, including the heart, lungs, stomach, and joints. We show in this paper that several vital signals can be recorded simultaneously with high quality. As Patchkeeper can be operated directly by the user, i.e. without involving health care professionals, we believe it could be a useful tool for telemedicine and remote diagnostics.