Dataset schema (field, type, min-max length):

id              stringlengths   9 - 10
submitter       stringlengths   1 - 64
authors         stringlengths   4 - 20.7k
title           stringlengths   4 - 246
comments        stringlengths   1 - 523
journal-ref     stringlengths   4 - 404
doi             stringlengths   11 - 153
report-no       stringlengths   2 - 254
categories      stringlengths   5 - 98
license         stringclasses   9 values
orig_abstract   stringlengths   14 - 3.35k
versions        listlengths     1 - 60
update_date     stringlengths   10 - 10
authors_parsed  listlengths     1 - 1.35k
abstract        stringlengths   11 - 3.34k
2303.11315
Wenxuan Zhou
Wenxuan Zhou, Sheng Zhang, Hoifung Poon, Muhao Chen
Context-faithful Prompting for Large Language Models
Accepted at EMNLP 2023 Findings. Code and data are released at https://github.com/wzhouad/context-faithful-llm
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts. Code and data are released at https://github.com/wzhouad/context-faithful-llm.
[ { "created": "Mon, 20 Mar 2023 17:54:58 GMT", "version": "v1" }, { "created": "Mon, 23 Oct 2023 03:25:13 GMT", "version": "v2" } ]
2023-10-24
[ [ "Zhou", "Wenxuan", "" ], [ "Zhang", "Sheng", "" ], [ "Poon", "Hoifung", "" ], [ "Chen", "Muhao", "" ] ]
Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved using carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge conflict situations. Neither technique requires additional training. We conduct experiments on three datasets of two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvement in faithfulness to contexts. Code and data are released at https://github.com/wzhouad/context-faithful-llm.
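A minimal sketch of the opinion-based reframing described in the abstract above, assuming a simple template; the paper's exact prompt wording lives in the linked repository, and both `opinion_based_prompt` and the narrator name "Bob" are illustrative:

```python
def opinion_based_prompt(context: str, question: str) -> str:
    """Reframe the context as a narrator's statement and ask for the
    narrator's opinion instead of querying the model's own knowledge.
    Template is illustrative, not the paper's exact wording."""
    return (
        f'Bob said, "{context}"\n'
        f"Q: {question} in Bob's opinion?\n"
        "A:"
    )

# A counterfactual context: the reframing nudges the model to answer
# from the narrator's statement rather than its parametric knowledge.
prompt = opinion_based_prompt(
    "The capital of France is Marseille.",
    "What is the capital of France",
)
print(prompt)
```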
1804.03357
Yasushi Tanaka
Yasushi Tanaka, Hajimu Iida, Yasuhiro Takemura
A Manga-Driven System Requirements Development PBL Exercise
SEEM2018
null
10.1145/3194779.3194788
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We conducted a Project-Based Learning (PBL)-type exercise incorporating Japanese cartoon (manga) techniques into Requirements Development (RD) processes. Manga has established techniques, such as those for character setting and story development, which we expected to be valid for RD processes as well. Using this manga-driven method, students were able to clarify high-level project goals early in the development life-cycle and succeeded in defining high-quality, unique system ideas.
[ { "created": "Tue, 10 Apr 2018 06:26:20 GMT", "version": "v1" } ]
2018-04-11
[ [ "Tanaka", "Yasushi", "" ], [ "Iida", "Hajimu", "" ], [ "Takemura", "Yasuhiro", "" ] ]
We conducted a Project-Based Learning (PBL)-type exercise incorporating Japanese cartoon (manga) techniques into Requirements Development (RD) processes. Manga has established techniques, such as those for character setting and story development, which we expected to be valid for RD processes as well. Using this manga-driven method, students were able to clarify high-level project goals early in the development life-cycle and succeeded in defining high-quality, unique system ideas.
2408.05452
Junjie Jiang
Junjie Jiang, Hao Zhuang, Xinjie Huang, Delei Kong, Zheng Fang
EV-MGDispNet: Motion-Guided Event-Based Stereo Disparity Estimation Network with Left-Right Consistency
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event cameras have the potential to revolutionize the field of robot vision, particularly in areas like stereo disparity estimation, owing to their high temporal resolution and high dynamic range. Many studies use deep learning for event camera stereo disparity estimation. However, these methods fail to fully exploit the temporal information in the event stream to acquire clear event representations. Additionally, there is room for further reduction in pixel shifts in the feature maps before constructing the cost volume. In this paper, we propose EV-MGDispNet, a novel event-based stereo disparity estimation method. First, we propose an edge-aware aggregation (EAA) module, which fuses event frames and motion confidence maps to generate a novel clear event representation. Next, we propose a motion-guided attention (MGA) module, in which motion confidence maps are fed through deformable transformer encoders to enhance the feature map with more accurate edges. Finally, we add a census left-right consistency loss function to enhance the left-right consistency of stereo event representations. Through experiments in challenging real-world driving scenarios, we validate that our method outperforms currently known state-of-the-art methods in terms of mean absolute error (MAE) and root mean square error (RMSE) metrics.
[ { "created": "Sat, 10 Aug 2024 06:13:37 GMT", "version": "v1" } ]
2024-08-13
[ [ "Jiang", "Junjie", "" ], [ "Zhuang", "Hao", "" ], [ "Huang", "Xinjie", "" ], [ "Kong", "Delei", "" ], [ "Fang", "Zheng", "" ] ]
Event cameras have the potential to revolutionize the field of robot vision, particularly in areas like stereo disparity estimation, owing to their high temporal resolution and high dynamic range. Many studies use deep learning for event camera stereo disparity estimation. However, these methods fail to fully exploit the temporal information in the event stream to acquire clear event representations. Additionally, there is room for further reduction in pixel shifts in the feature maps before constructing the cost volume. In this paper, we propose EV-MGDispNet, a novel event-based stereo disparity estimation method. First, we propose an edge-aware aggregation (EAA) module, which fuses event frames and motion confidence maps to generate a novel clear event representation. Next, we propose a motion-guided attention (MGA) module, in which motion confidence maps are fed through deformable transformer encoders to enhance the feature map with more accurate edges. Finally, we add a census left-right consistency loss function to enhance the left-right consistency of stereo event representations. Through experiments in challenging real-world driving scenarios, we validate that our method outperforms currently known state-of-the-art methods in terms of mean absolute error (MAE) and root mean square error (RMSE) metrics.
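The census left-right consistency loss mentioned above builds on the classic census transform. Below is a sketch of the plain (hard) census descriptor; it is the standard transform, not the authors' exact soft variant used in their loss:

```python
import numpy as np

def census_transform(img: np.ndarray, window: int = 3) -> np.ndarray:
    """Census transform: encode each pixel as a bit string recording
    whether each neighbor in the window is brighter than the window's
    center pixel. Standard descriptor; illustrative of the idea behind
    census-based consistency losses, not the paper's soft variant."""
    r = window // 2
    h, w = img.shape
    padded = np.pad(img, r, mode="edge")
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # the center pixel itself is skipped
            neighbor = padded[r + dy : r + dy + h, r + dx : r + dx + w]
            out = (out << np.uint64(1)) | (neighbor > img).astype(np.uint64)
    return out
```

Summing the bitwise Hamming distance between left and warped-right census codes yields the kind of illumination-robust matching cost such consistency losses are typically built from.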
2404.06357
Hyewon Jang
Hyewon Jang, Diego Frassinelli
Generalizable Sarcasm Detection Is Just Around The Corner, Of Course!
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We tested the robustness of sarcasm detection models by examining their behavior when fine-tuned on four sarcasm datasets containing varying characteristics of sarcasm: label source (authors vs. third-party), domain (social media/online vs. offline conversations/dialogues), style (aggressive vs. humorous mocking). We tested their prediction performance on the same dataset (intra-dataset) and across different datasets (cross-dataset). For intra-dataset predictions, models consistently performed better when fine-tuned with third-party labels rather than with author labels. For cross-dataset predictions, most models failed to generalize well to the other datasets, implying that one type of dataset cannot represent all sorts of sarcasm with different styles and domains. Compared to the existing datasets, models fine-tuned on the new dataset we release in this work showed the highest generalizability to other datasets. With a manual inspection of the datasets and post-hoc analysis, we attributed the difficulty in generalization to the fact that sarcasm actually comes in different domains and styles. We argue that future sarcasm research should take the broad scope of sarcasm into account.
[ { "created": "Tue, 9 Apr 2024 14:48:32 GMT", "version": "v1" }, { "created": "Wed, 10 Apr 2024 07:48:08 GMT", "version": "v2" } ]
2024-04-11
[ [ "Jang", "Hyewon", "" ], [ "Frassinelli", "Diego", "" ] ]
We tested the robustness of sarcasm detection models by examining their behavior when fine-tuned on four sarcasm datasets containing varying characteristics of sarcasm: label source (authors vs. third-party), domain (social media/online vs. offline conversations/dialogues), style (aggressive vs. humorous mocking). We tested their prediction performance on the same dataset (intra-dataset) and across different datasets (cross-dataset). For intra-dataset predictions, models consistently performed better when fine-tuned with third-party labels rather than with author labels. For cross-dataset predictions, most models failed to generalize well to the other datasets, implying that one type of dataset cannot represent all sorts of sarcasm with different styles and domains. Compared to the existing datasets, models fine-tuned on the new dataset we release in this work showed the highest generalizability to other datasets. With a manual inspection of the datasets and post-hoc analysis, we attributed the difficulty in generalization to the fact that sarcasm actually comes in different domains and styles. We argue that future sarcasm research should take the broad scope of sarcasm into account.
2402.05012
Amir K. Khandani Dr.
Amir K. Khandani
Information Theoretically Secure Encryption Key Generation over Wireless Networks by Exploiting Packet Errors
null
null
null
null
cs.IT cs.CR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents a novel method for establishing an information-theoretically secure encryption key over wireless channels. It exploits the fact that data transmission over wireless links is accompanied by packet errors, and that the noise terms, and thereby the error events, observed by two separate receivers are independent of each other. A number of data packets, with random data, are transmitted from a first legitimate node, say Alice, to a second legitimate node, say Bob. Bob identifies all packets that are received error-free in the first transmission attempt and sends their indices to Alice over a public channel. Then, both Alice and Bob mix the contents of the identified packets, e.g., using a hash function, and thereby derive an identical encryption key. Since error events from Alice to Bob are independent of error events from Alice to Eve, the chance that Eve has successfully received all packets used in key generation error-free diminishes as the number of packets increases. In many wireless standards, the first stage in error detection and Automatic Repeat Request (ARQ) is deployed at the PHY/MAC (Physical Layer/Medium Access Control) layer. In such setups, the first re-transmission is managed by the PHY/MAC layer without informing higher layers. This makes it impossible to directly access the information related to packet errors through the high-level programming interfaces available to an end-user. A method is presented for determining the packets received error-free in first transmission attempts through high-level programming. Examples are presented in conjunction with an LTE cellular network.
[ { "created": "Wed, 7 Feb 2024 16:32:13 GMT", "version": "v1" } ]
2024-02-08
[ [ "Khandani", "Amir K.", "" ] ]
This article presents a novel method for establishing an information-theoretically secure encryption key over wireless channels. It exploits the fact that data transmission over wireless links is accompanied by packet errors, and that the noise terms, and thereby the error events, observed by two separate receivers are independent of each other. A number of data packets, with random data, are transmitted from a first legitimate node, say Alice, to a second legitimate node, say Bob. Bob identifies all packets that are received error-free in the first transmission attempt and sends their indices to Alice over a public channel. Then, both Alice and Bob mix the contents of the identified packets, e.g., using a hash function, and thereby derive an identical encryption key. Since error events from Alice to Bob are independent of error events from Alice to Eve, the chance that Eve has successfully received all packets used in key generation error-free diminishes as the number of packets increases. In many wireless standards, the first stage in error detection and Automatic Repeat Request (ARQ) is deployed at the PHY/MAC (Physical Layer/Medium Access Control) layer. In such setups, the first re-transmission is managed by the PHY/MAC layer without informing higher layers. This makes it impossible to directly access the information related to packet errors through the high-level programming interfaces available to an end-user. A method is presented for determining the packets received error-free in first transmission attempts through high-level programming. Examples are presented in conjunction with an LTE cellular network.
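The key-agreement protocol described above can be sketched as a toy simulation, assuming idealized independent packet-error events and SHA-256 as the mixing hash; `simulate_key_agreement` and all parameter values are illustrative, not taken from the paper:

```python
import hashlib
import random

def simulate_key_agreement(n_packets=64, p_err=0.2, seed=7):
    """Toy simulation of the scheme above: Alice sends random packets,
    Bob announces (over a public channel) the indices received
    error-free on the first attempt, and both sides hash those packets
    into a key. Error events are drawn independently for Bob and Eve,
    matching the independence assumption in the text."""
    rng = random.Random(seed)
    packets = [rng.randbytes(16) for _ in range(n_packets)]
    bob_ok = [i for i in range(n_packets) if rng.random() > p_err]
    eve_ok = {i for i in range(n_packets) if rng.random() > p_err}
    # Both ends hash the same error-free packets (Bob uses his received
    # copies, which are bit-identical when no error occurred).
    key_alice = hashlib.sha256(b"".join(packets[i] for i in bob_ok)).hexdigest()
    key_bob = hashlib.sha256(b"".join(packets[i] for i in bob_ok)).hexdigest()
    # Eve only recovers the key if she got every kept packet error-free:
    # probability ~ (1 - p_err) ** len(bob_ok), vanishing as n_packets grows.
    eve_has_all = all(i in eve_ok for i in bob_ok)
    return key_alice, key_bob, eve_has_all
```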
2312.10019
Kwanghee Choi
Kwanghee Choi, Jee-weon Jung, Shinji Watanabe
Understanding Probe Behaviors through Variational Bounds of Mutual Information
Accepted to ICASSP 2024, implementation available at https://github.com/juice500ml/information_probing
null
null
null
cs.IT cs.LG eess.AS math.IT
http://creativecommons.org/licenses/by/4.0/
With the success of self-supervised representations, researchers seek a better understanding of the information encapsulated within a representation. Among various interpretability methods, we focus on classification-based linear probing. We aim to foster a solid understanding and provide guidelines for linear probing by constructing a novel mathematical framework leveraging information theory. First, we connect probing with the variational bounds of mutual information (MI) to relax the probe design, equating linear probing with fine-tuning. Then, we investigate empirical behaviors and practices of probing through our mathematical framework. We analyze the layer-wise performance curve being convex, which seemingly violates the data processing inequality. However, we show that the intermediate representations can have the biggest MI estimate because of the tradeoff between better separability and decreasing MI. We further suggest that the margin of linearly separable representations can be a criterion for measuring the "goodness of representation." We also compare accuracy with MI as the measuring criteria. Finally, we empirically validate our claims by observing the self-supervised speech models on retaining word and phoneme information.
[ { "created": "Fri, 15 Dec 2023 18:38:18 GMT", "version": "v1" } ]
2023-12-18
[ [ "Choi", "Kwanghee", "" ], [ "Jung", "Jee-weon", "" ], [ "Watanabe", "Shinji", "" ] ]
With the success of self-supervised representations, researchers seek a better understanding of the information encapsulated within a representation. Among various interpretability methods, we focus on classification-based linear probing. We aim to foster a solid understanding and provide guidelines for linear probing by constructing a novel mathematical framework leveraging information theory. First, we connect probing with the variational bounds of mutual information (MI) to relax the probe design, equating linear probing with fine-tuning. Then, we investigate empirical behaviors and practices of probing through our mathematical framework. We analyze the layer-wise performance curve being convex, which seemingly violates the data processing inequality. However, we show that the intermediate representations can have the biggest MI estimate because of the tradeoff between better separability and decreasing MI. We further suggest that the margin of linearly separable representations can be a criterion for measuring the "goodness of representation." We also compare accuracy with MI as the measuring criteria. Finally, we empirically validate our claims by observing the self-supervised speech models on retaining word and phoneme information.
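The variational lower bound connecting probes to MI, I(Z;Y) >= H(Y) - E[-log q(y|z)], can be illustrated on discrete toy data where the probe q(y|z) is the empirical conditional; this is a generic plug-in sketch, not the authors' neural probes:

```python
import numpy as np

def mi_lower_bound(z: np.ndarray, y: np.ndarray) -> float:
    """Variational bound I(Z;Y) >= H(Y) - E[-log q(y|z)], with the
    empirical conditional as the probe q. On discrete data this
    plug-in choice makes the bound tight. Toy illustration only."""
    _, counts = np.unique(y, return_counts=True)
    py = counts / len(y)
    h_y = -(py * np.log(py)).sum()          # label entropy H(Y)
    ce = 0.0
    for i in range(len(z)):
        mask = z == z[i]
        q = (y[mask] == y[i]).mean()        # empirical q(y_i | z_i)
        ce += -np.log(q)
    ce /= len(z)                            # cross-entropy of the probe
    return h_y - ce
```

A perfectly predictive representation attains the full label entropy; an uninformative one drives the bound to zero.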
0801.0455
Jorg Liebeherr
Jorg Liebeherr, Markus Fidler, Shahrokh Valaee
A System Theoretic Approach to Bandwidth Estimation
23 pages
null
null
null
cs.NI cs.PF
null
It is shown that bandwidth estimation in packet networks can be viewed in terms of min-plus linear system theory. The available bandwidth of a link or complete path is expressed in terms of a {\em service curve}, which is a function that appears in the network calculus to express the service available to a traffic flow. The service curve is estimated based on measurements of a sequence of probing packets or passive measurements of a sample path of arrivals. It is shown that existing bandwidth estimation methods can be derived in the min-plus algebra of the network calculus, thus providing further mathematical justification for these methods. Principal difficulties of estimating available bandwidth from measurement of network probes are related to potential non-linearities of the underlying network. When networks are viewed as systems that operate either in a linear or in a non-linear regime, it is argued that probing schemes extract the most information at a point when the network crosses from a linear to a non-linear regime. Experiments on the Emulab testbed at the University of Utah evaluate the robustness of the system theoretic interpretation of networks in practice. Multi-node experiments evaluate how well the convolution operation of the min-plus algebra provides estimates for the available bandwidth of a path from estimates of individual links.
[ { "created": "Thu, 3 Jan 2008 00:11:26 GMT", "version": "v1" } ]
2008-01-04
[ [ "Liebeherr", "Jorg", "" ], [ "Fidler", "Markus", "" ], [ "Valaee", "Shahrokh", "" ] ]
It is shown that bandwidth estimation in packet networks can be viewed in terms of min-plus linear system theory. The available bandwidth of a link or complete path is expressed in terms of a {\em service curve}, which is a function that appears in the network calculus to express the service available to a traffic flow. The service curve is estimated based on measurements of a sequence of probing packets or passive measurements of a sample path of arrivals. It is shown that existing bandwidth estimation methods can be derived in the min-plus algebra of the network calculus, thus providing further mathematical justification for these methods. Principal difficulties of estimating available bandwidth from measurement of network probes are related to potential non-linearities of the underlying network. When networks are viewed as systems that operate either in a linear or in a non-linear regime, it is argued that probing schemes extract the most information at a point when the network crosses from a linear to a non-linear regime. Experiments on the Emulab testbed at the University of Utah evaluate the robustness of the system theoretic interpretation of networks in practice. Multi-node experiments evaluate how well the convolution operation of the min-plus algebra provides estimates for the available bandwidth of a path from estimates of individual links.
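The min-plus convolution underlying the multi-node analysis above can be illustrated in discrete time, here composing two hypothetical rate-latency service curves (function names and parameter values are illustrative):

```python
def min_plus_conv(f, g, T):
    """Min-plus convolution (f (x) g)(t) = min_{0<=s<=t} f(s) + g(t-s),
    the network-calculus operation that composes per-link service
    curves into an end-to-end service curve. Discrete-time sketch."""
    return [min(f(s) + g(t - s) for s in range(t + 1)) for t in range(T)]

# Two rate-latency links: beta_{R,L}(t) = max(0, R * (t - L))
beta1 = lambda t: max(0, 5 * (t - 2))   # rate 5, latency 2
beta2 = lambda t: max(0, 3 * (t - 1))   # rate 3, latency 1
end_to_end = min_plus_conv(beta1, beta2, 10)
# Composing rate-latency curves yields rate min(5, 3) = 3, latency 2 + 1 = 3
```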
2011.05431
Nikolaos Stylianou
Nikolaos Stylianou, Ioannis Vlahavas
E.T.: Entity-Transformers. Coreference augmented Neural Language Model for richer mention representations via Entity-Transformer blocks
10 pages, 4 figures, 5 tables, accepted at CRAC2020
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
In the last decade, the field of Neural Language Modelling has witnessed enormous changes, with the development of novel models through the use of Transformer architectures. However, even these models struggle to model long sequences due to memory constraints and increasing computational complexity. Coreference annotations over the training data can provide context far beyond the modelling limitations of such language models. In this paper we present an extension over the Transformer-block architecture used in neural language models, specifically in GPT2, in order to incorporate entity annotations during training. Our model, GPT2E, extends the Transformer layers architecture of GPT2 to Entity-Transformers, an architecture designed to handle coreference information when present. To that end, we achieve richer representations for entity mentions, with insignificant training cost. We show the comparative model performance between GPT2 and GPT2E in terms of Perplexity on the CoNLL 2012 and LAMBADA datasets as well as the key differences in the entity representations and their effects in downstream tasks such as Named Entity Recognition. Furthermore, our approach can be adopted by the majority of Transformer-based language models.
[ { "created": "Tue, 10 Nov 2020 22:28:00 GMT", "version": "v1" } ]
2020-11-12
[ [ "Stylianou", "Nikolaos", "" ], [ "Vlahavas", "Ioannis", "" ] ]
In the last decade, the field of Neural Language Modelling has witnessed enormous changes, with the development of novel models through the use of Transformer architectures. However, even these models struggle to model long sequences due to memory constraints and increasing computational complexity. Coreference annotations over the training data can provide context far beyond the modelling limitations of such language models. In this paper we present an extension over the Transformer-block architecture used in neural language models, specifically in GPT2, in order to incorporate entity annotations during training. Our model, GPT2E, extends the Transformer layers architecture of GPT2 to Entity-Transformers, an architecture designed to handle coreference information when present. To that end, we achieve richer representations for entity mentions, with insignificant training cost. We show the comparative model performance between GPT2 and GPT2E in terms of Perplexity on the CoNLL 2012 and LAMBADA datasets as well as the key differences in the entity representations and their effects in downstream tasks such as Named Entity Recognition. Furthermore, our approach can be adopted by the majority of Transformer-based language models.
1409.5223
Ben Ruijl
Ben Ruijl, Aske Plaat, Jos Vermaseren, Jaap van den Herik
Why Local Search Excels in Expression Simplification
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simplifying expressions is important to make numerical integration of large expressions from High Energy Physics tractable. To this end, Horner's method can be used. Finding suitable Horner schemes is assumed to be hard, due to the lack of local heuristics. Recently, MCTS was reported to be able to find near optimal schemes. However, several parameters had to be fine-tuned manually. In this work, we investigate the state space properties of Horner schemes and find that the domain is relatively flat and contains only a few local minima. As a result, the Horner space is appropriate to be explored by Stochastic Local Search (SLS), which has only two parameters: the number of iterations (computation time) and the neighborhood structure. We found a suitable neighborhood structure, leaving only the allowed computation time as a parameter. We performed a range of experiments. The results obtained by SLS are similar or better than those obtained by MCTS. Furthermore, we show that SLS obtains the good results at least 10 times faster. Using SLS, we can speed up numerical integration of many real-world large expressions by at least a factor of 24. For High Energy Physics this means that numerical integrations that took weeks can now be done in hours.
[ { "created": "Thu, 18 Sep 2014 08:21:25 GMT", "version": "v1" } ]
2014-09-19
[ [ "Ruijl", "Ben", "" ], [ "Plaat", "Aske", "" ], [ "Vermaseren", "Jos", "" ], [ "Herik", "Jaap van den", "" ] ]
Simplifying expressions is important to make numerical integration of large expressions from High Energy Physics tractable. To this end, Horner's method can be used. Finding suitable Horner schemes is assumed to be hard, due to the lack of local heuristics. Recently, MCTS was reported to be able to find near optimal schemes. However, several parameters had to be fine-tuned manually. In this work, we investigate the state space properties of Horner schemes and find that the domain is relatively flat and contains only a few local minima. As a result, the Horner space is appropriate to be explored by Stochastic Local Search (SLS), which has only two parameters: the number of iterations (computation time) and the neighborhood structure. We found a suitable neighborhood structure, leaving only the allowed computation time as a parameter. We performed a range of experiments. The results obtained by SLS are similar or better than those obtained by MCTS. Furthermore, we show that SLS obtains the good results at least 10 times faster. Using SLS, we can speed up numerical integration of many real-world large expressions by at least a factor of 24. For High Energy Physics this means that numerical integrations that took weeks can now be done in hours.
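A generic SLS skeleton over variable orderings, as an illustration of the search described above; the toy cost function merely stands in for the real objective (the operation count of the resulting Horner scheme), which is not reproduced here:

```python
import random

def sls(cost, n_vars, iters=500, seed=0):
    """Stochastic local search over variable orderings (permutations).
    Neighborhood: swap two positions; accept moves that do not worsen
    the cost. Generic skeleton, not the authors' implementation."""
    rng = random.Random(seed)
    order = list(range(n_vars))
    rng.shuffle(order)                       # random starting ordering
    best = cost(order)
    for _ in range(iters):
        i, j = rng.sample(range(n_vars), 2)  # pick two distinct positions
        order[i], order[j] = order[j], order[i]
        c = cost(order)
        if c <= best:
            best = c                          # keep equal-or-better moves
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return order, best

# Toy cost standing in for the operation count of a Horner scheme.
toy_cost = lambda p: sum(abs(p[k] - k) for k in range(len(p)))
order, best = sls(toy_cost, 8)
```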
2105.01772
Stephen Gilbert
Charles Peasley, Rachel Dianiska, Emily Oldham, Nicholas Wilson, Stephen Gilbert, Peggy Wu, Brett Israelsen, James Oliver
Evaluating Metrics for Standardized Benchmarking of Remote Presence Systems
null
null
null
null
cs.HC cs.CY
http://creativecommons.org/licenses/by-sa/4.0/
To reduce the need for business-related air travel and its associated energy consumption and carbon footprint, the U.S. Department of Energy's ARPA-E is supporting a research project called SCOTTIE - Systematic Communication Objectives and Telecommunications Technology Investigations and Evaluations. SCOTTIE tests virtual and augmented reality platforms in a functional comparison with face-to-face (FtF) interactions to derive travel replacement thresholds for common industrial training scenarios. The primary goal of Study 1 is to match the communication effectiveness and learning outcomes obtained from a FtF control using virtual reality (VR) training scenarios in which a local expert with physical equipment trains a remote apprentice without physical equipment immediately present. This application scenario is commonplace in industrial settings where access to expensive equipment and materials is limited and a number of apprentices must travel to a central location in order to undergo training. Supplying an empirically validated virtual training alternative constitutes a readily adoptable use-case for businesses looking to reduce time and monetary expenditures associated with travel. The technology used for three different virtual presence technologies was strategically selected for feasibility, relatively low cost, business relevance, and potential for impact through transition. The authors suggest that the results of this study might generalize to the challenge of virtual conferences.
[ { "created": "Tue, 4 May 2021 21:36:53 GMT", "version": "v1" } ]
2021-05-06
[ [ "Peasley", "Charles", "" ], [ "Dianiska", "Rachel", "" ], [ "Oldham", "Emily", "" ], [ "Wilson", "Nicholas", "" ], [ "Gilbert", "Stephen", "" ], [ "Wu", "Peggy", "" ], [ "Israelsen", "Brett", "" ], [ "Oliver", "James", "" ] ]
To reduce the need for business-related air travel and its associated energy consumption and carbon footprint, the U.S. Department of Energy's ARPA-E is supporting a research project called SCOTTIE - Systematic Communication Objectives and Telecommunications Technology Investigations and Evaluations. SCOTTIE tests virtual and augmented reality platforms in a functional comparison with face-to-face (FtF) interactions to derive travel replacement thresholds for common industrial training scenarios. The primary goal of Study 1 is to match the communication effectiveness and learning outcomes obtained from a FtF control using virtual reality (VR) training scenarios in which a local expert with physical equipment trains a remote apprentice without physical equipment immediately present. This application scenario is commonplace in industrial settings where access to expensive equipment and materials is limited and a number of apprentices must travel to a central location in order to undergo training. Supplying an empirically validated virtual training alternative constitutes a readily adoptable use-case for businesses looking to reduce time and monetary expenditures associated with travel. The technology used for three different virtual presence technologies was strategically selected for feasibility, relatively low cost, business relevance, and potential for impact through transition. The authors suggest that the results of this study might generalize to the challenge of virtual conferences.
2303.04598
Agi Kurucz
Agi Kurucz, Frank Wolter, Michael Zakharyaschev
Deciding the Existence of Interpolants and Definitions in First-Order Modal Logic
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
None of the first-order modal logics between $\mathsf{K}$ and $\mathsf{S5}$ under the constant domain semantics enjoys Craig interpolation or projective Beth definability, even in the language restricted to a single individual variable. It follows that the existence of a Craig interpolant for a given implication or of an explicit definition for a given predicate cannot be directly reduced to validity as in classical first-order and many other logics. Our concern here is the decidability and computational complexity of the interpolant and definition existence problems. We first consider two decidable fragments of first-order modal logic $\mathsf{S5}$: the one-variable fragment $\mathsf{Q^1S5}$ and its extension $\mathsf{S5}_{\mathcal{ALC}^u}$ that combines $\mathsf{S5}$ and the description logic $\mathcal{ALC}$ with the universal role. We prove that interpolant and definition existence in $\mathsf{Q^1S5}$ and $\mathsf{S5}_{\mathcal{ALC}^u}$ is decidable in coN2ExpTime, being 2ExpTime-hard, while uniform interpolant existence is undecidable. These results transfer to the two-variable fragment $\mathsf{FO^2}$ of classical first-order logic without equality. We also show that interpolant and definition existence in the one-variable fragment $\mathsf{Q^1K}$ of first-order modal logic $\mathsf{K}$ is non-elementary decidable, while uniform interpolant existence is again undecidable.
[ { "created": "Wed, 8 Mar 2023 14:10:59 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 12:03:35 GMT", "version": "v2" } ]
2024-06-06
[ [ "Kurucz", "Agi", "" ], [ "Wolter", "Frank", "" ], [ "Zakharyaschev", "Michael", "" ] ]
None of the first-order modal logics between $\mathsf{K}$ and $\mathsf{S5}$ under the constant domain semantics enjoys Craig interpolation or projective Beth definability, even in the language restricted to a single individual variable. It follows that the existence of a Craig interpolant for a given implication or of an explicit definition for a given predicate cannot be directly reduced to validity as in classical first-order and many other logics. Our concern here is the decidability and computational complexity of the interpolant and definition existence problems. We first consider two decidable fragments of first-order modal logic $\mathsf{S5}$: the one-variable fragment $\mathsf{Q^1S5}$ and its extension $\mathsf{S5}_{\mathcal{ALC}^u}$ that combines $\mathsf{S5}$ and the description logic $\mathcal{ALC}$ with the universal role. We prove that interpolant and definition existence in $\mathsf{Q^1S5}$ and $\mathsf{S5}_{\mathcal{ALC}^u}$ is decidable in coN2ExpTime, being 2ExpTime-hard, while uniform interpolant existence is undecidable. These results transfer to the two-variable fragment $\mathsf{FO^2}$ of classical first-order logic without equality. We also show that interpolant and definition existence in the one-variable fragment $\mathsf{Q^1K}$ of first-order modal logic $\mathsf{K}$ is non-elementary decidable, while uniform interpolant existence is again undecidable.
2407.17316
Niklas B\"oing
N. B\"oing, J. Holke, C. Hergl, L. Spataro, G. Gassner, A. Basermann
Lossy Data Compression By Adaptive Mesh Coarsening
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Today's scientific simulations, for example in the high-performance exascale sector, produce huge amounts of data. Due to limited I/O bandwidth and available storage space, the scientific data produced by high-performance computing applications must be reduced. Error-bounded lossy compression has proven to be an effective approach for tackling the trade-off between accuracy and storage space. In this work, we explore and discuss error-bounded lossy compression based solely on adaptive mesh refinement techniques. This compression technique is not only easily integrated into existing adaptive mesh refinement applications but also serves as a general lossy compression approach for arbitrary data in the form of multi-dimensional arrays, irrespective of the data type. Moreover, these techniques permit the exclusion of regions of interest and even allow for nested error domains during compression. The described data compression technique is demonstrated on ERA5 data.
[ { "created": "Wed, 24 Jul 2024 14:39:24 GMT", "version": "v1" } ]
2024-07-25
[ [ "Böing", "N.", "" ], [ "Holke", "J.", "" ], [ "Hergl", "C.", "" ], [ "Spataro", "L.", "" ], [ "Gassner", "G.", "" ], [ "Basermann", "A.", "" ] ]
Today's scientific simulations, for example in the high-performance exascale sector, produce huge amounts of data. Due to limited I/O bandwidth and available storage space, the scientific data produced by high-performance computing applications must be reduced. Error-bounded lossy compression has proven to be an effective approach for tackling the trade-off between accuracy and storage space. In this work, we explore and discuss error-bounded lossy compression based solely on adaptive mesh refinement techniques. This compression technique is not only easily integrated into existing adaptive mesh refinement applications but also serves as a general lossy compression approach for arbitrary data in the form of multi-dimensional arrays, irrespective of the data type. Moreover, these techniques permit the exclusion of regions of interest and even allow for nested error domains during compression. The described data compression technique is demonstrated on ERA5 data.
2308.10457
Xinpeng Ling
Xinpeng Ling, Jie Fu, Kuncan Wang, Haitao Liu, Zhili Chen
ALI-DPFL: Differentially Private Federated Learning with Adaptive Local Iterations
null
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) is a distributed machine learning technique that allows model training among multiple devices or organizations by sharing training parameters instead of raw data. However, adversaries can still infer individual information through inference attacks (e.g., differential attacks) on these training parameters. As a result, Differential Privacy (DP) has been widely used in FL to prevent such attacks. We consider differentially private federated learning in a resource-constrained scenario, where both the privacy budget and the communication rounds are constrained. By theoretically analyzing the convergence, we can find the optimal number of local DPSGD iterations for clients between any two sequential global updates. Based on this, we design an algorithm of Differentially Private Federated Learning with Adaptive Local Iterations (ALI-DPFL). We evaluate our algorithm on the MNIST, FashionMNIST and Cifar10 datasets, and demonstrate significantly better performance than previous work in the resource-constrained scenario. Code is available at https://github.com/cheng-t/ALI-DPFL.
[ { "created": "Mon, 21 Aug 2023 04:09:59 GMT", "version": "v1" }, { "created": "Thu, 21 Sep 2023 14:59:28 GMT", "version": "v2" }, { "created": "Fri, 22 Sep 2023 07:59:03 GMT", "version": "v3" }, { "created": "Sun, 25 Feb 2024 06:56:16 GMT", "version": "v4" }, { "created": "Sun, 24 Mar 2024 10:04:37 GMT", "version": "v5" }, { "created": "Tue, 23 Apr 2024 14:34:45 GMT", "version": "v6" }, { "created": "Wed, 24 Apr 2024 06:12:08 GMT", "version": "v7" }, { "created": "Fri, 17 May 2024 03:12:57 GMT", "version": "v8" }, { "created": "Wed, 22 May 2024 04:17:46 GMT", "version": "v9" } ]
2024-05-27
[ [ "Ling", "Xinpeng", "" ], [ "Fu", "Jie", "" ], [ "Wang", "Kuncan", "" ], [ "Liu", "Haitao", "" ], [ "Chen", "Zhili", "" ] ]
Federated Learning (FL) is a distributed machine learning technique that allows model training among multiple devices or organizations by sharing training parameters instead of raw data. However, adversaries can still infer individual information through inference attacks (e.g., differential attacks) on these training parameters. As a result, Differential Privacy (DP) has been widely used in FL to prevent such attacks. We consider differentially private federated learning in a resource-constrained scenario, where both the privacy budget and the communication rounds are constrained. By theoretically analyzing the convergence, we can find the optimal number of local DPSGD iterations for clients between any two sequential global updates. Based on this, we design an algorithm of Differentially Private Federated Learning with Adaptive Local Iterations (ALI-DPFL). We evaluate our algorithm on the MNIST, FashionMNIST and Cifar10 datasets, and demonstrate significantly better performance than previous work in the resource-constrained scenario. Code is available at https://github.com/cheng-t/ALI-DPFL.
1707.00338
Luciana Foss
Leila Ribeiro, Luciana Foss, Simone Andr\'e da Costa Cavalheiro
Entendendo o Pensamento Computacional
18 pages, in Portuguese
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this article is to clarify the meaning of Computational Thinking. We differentiate logical from computational reasoning and discuss the importance of Computational Thinking in solving problems. The three pillars of Computational Thinking - Abstraction, Automation and Analysis - are outlined, highlighting the role of each one in developing the skills needed for the problem-solving process. ----- O objetivo deste artigo \'e esclarecer o significado de Pensamento Computacional. Diferencia-se o racioc\'inio l\'ogico do computacional e discute-se a import\^ancia do Pensamento Computacional na resolu\c{c}\~ao de problemas. Os tr\^es pilares do Pensamento Computacional - Abstra\c{c}\~ao, Automa\c{c}\~ao e An\'alise - s\~ao delineados, destacando-se o papel de cada um deles no desenvolvimento das habilidades necess\'arias para o processo de solu\c{c}\~ao de problemas.
[ { "created": "Sun, 2 Jul 2017 19:38:55 GMT", "version": "v1" } ]
2017-07-04
[ [ "Ribeiro", "Leila", "" ], [ "Foss", "Luciana", "" ], [ "Cavalheiro", "Simone André da Costa", "" ] ]
The goal of this article is to clarify the meaning of Computational Thinking. We differentiate logical from computational reasoning and discuss the importance of Computational Thinking in solving problems. The three pillars of Computational Thinking - Abstraction, Automation and Analysis - are outlined, highlighting the role of each one in developing the skills needed for the problem-solving process. ----- O objetivo deste artigo \'e esclarecer o significado de Pensamento Computacional. Diferencia-se o racioc\'inio l\'ogico do computacional e discute-se a import\^ancia do Pensamento Computacional na resolu\c{c}\~ao de problemas. Os tr\^es pilares do Pensamento Computacional - Abstra\c{c}\~ao, Automa\c{c}\~ao e An\'alise - s\~ao delineados, destacando-se o papel de cada um deles no desenvolvimento das habilidades necess\'arias para o processo de solu\c{c}\~ao de problemas.
2101.08032
Wanguang Yin
Wanguang Yin, Zhengming Ma, Quanying Liu
Riemannian Manifold Optimization for Discriminant Subspace Learning
13 pages, 4 figures, 6 tables
null
null
null
cs.LG eess.IV eess.SP
http://creativecommons.org/licenses/by-sa/4.0/
Linear discriminant analysis (LDA) is a widely used algorithm in machine learning for extracting a low-dimensional representation of high-dimensional data; it finds an orthogonal discriminant projection subspace using the Fisher discriminant criterion. However, traditional Euclidean-based methods for solving LDA easily converge to spurious local minima and rarely obtain an optimal solution. To address this problem, in this paper we propose a novel algorithm, Riemannian-based discriminant analysis (RDA), for subspace learning. In order to obtain an explicit solution, we transform the traditional Euclidean-based methods to the Riemannian manifold space and use the trust-region method to learn the discriminant projection subspace. We compare the proposed algorithm to existing variants of LDA, as well as unsupervised tensor decomposition methods, on image classification tasks. The numerical results suggest that RDA achieves state-of-the-art performance in classification accuracy.
[ { "created": "Wed, 20 Jan 2021 09:13:34 GMT", "version": "v1" }, { "created": "Tue, 26 Jan 2021 07:17:29 GMT", "version": "v2" }, { "created": "Tue, 20 Jul 2021 02:37:14 GMT", "version": "v3" } ]
2021-07-21
[ [ "Yin", "Wanguang", "" ], [ "Ma", "Zhengming", "" ], [ "Liu", "Quanying", "" ] ]
Linear discriminant analysis (LDA) is a widely used algorithm in machine learning for extracting a low-dimensional representation of high-dimensional data; it finds an orthogonal discriminant projection subspace using the Fisher discriminant criterion. However, traditional Euclidean-based methods for solving LDA easily converge to spurious local minima and rarely obtain an optimal solution. To address this problem, in this paper we propose a novel algorithm, Riemannian-based discriminant analysis (RDA), for subspace learning. In order to obtain an explicit solution, we transform the traditional Euclidean-based methods to the Riemannian manifold space and use the trust-region method to learn the discriminant projection subspace. We compare the proposed algorithm to existing variants of LDA, as well as unsupervised tensor decomposition methods, on image classification tasks. The numerical results suggest that RDA achieves state-of-the-art performance in classification accuracy.
2404.15385
Alaa Elobaid
Alaa Elobaid, Nathan Ramoly, Lara Younes, Symeon Papadopoulos, Eirini Ntoutsi and Ioannis Kompatsiaris
Sum of Group Error Differences: A Critical Examination of Bias Evaluation in Biometric Verification and a Dual-Metric Measure
null
null
null
null
cs.CV cs.AI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biometric Verification (BV) systems often exhibit accuracy disparities across different demographic groups, leading to biases in BV applications. Assessing and quantifying these biases is essential for ensuring the fairness of BV systems. However, existing bias evaluation metrics in BV have limitations, such as focusing exclusively on match or non-match error rates, overlooking bias on demographic groups with performance levels falling between the best and worst performance levels, and neglecting the magnitude of the bias present. This paper presents an in-depth analysis of the limitations of current bias evaluation metrics in BV and, through experimental analysis, demonstrates their contextual suitability, merits, and limitations. Additionally, it introduces a novel general-purpose bias evaluation measure for BV, the ``Sum of Group Error Differences (SEDG)''. Our experimental results on controlled synthetic datasets demonstrate the effectiveness of demographic bias quantification when using existing metrics and our own proposed measure. We discuss the applicability of the bias evaluation metrics in a set of simulated demographic bias scenarios and provide scenario-based metric recommendations. Our code is publicly available under \url{https://github.com/alaaobeid/SEDG}.
[ { "created": "Tue, 23 Apr 2024 10:59:44 GMT", "version": "v1" } ]
2024-04-25
[ [ "Elobaid", "Alaa", "" ], [ "Ramoly", "Nathan", "" ], [ "Younes", "Lara", "" ], [ "Papadopoulos", "Symeon", "" ], [ "Ntoutsi", "Eirini", "" ], [ "Kompatsiaris", "Ioannis", "" ] ]
Biometric Verification (BV) systems often exhibit accuracy disparities across different demographic groups, leading to biases in BV applications. Assessing and quantifying these biases is essential for ensuring the fairness of BV systems. However, existing bias evaluation metrics in BV have limitations, such as focusing exclusively on match or non-match error rates, overlooking bias on demographic groups with performance levels falling between the best and worst performance levels, and neglecting the magnitude of the bias present. This paper presents an in-depth analysis of the limitations of current bias evaluation metrics in BV and, through experimental analysis, demonstrates their contextual suitability, merits, and limitations. Additionally, it introduces a novel general-purpose bias evaluation measure for BV, the ``Sum of Group Error Differences (SEDG)''. Our experimental results on controlled synthetic datasets demonstrate the effectiveness of demographic bias quantification when using existing metrics and our own proposed measure. We discuss the applicability of the bias evaluation metrics in a set of simulated demographic bias scenarios and provide scenario-based metric recommendations. Our code is publicly available under \url{https://github.com/alaaobeid/SEDG}.
0912.4087
Wei Ren
Wei Ren, Qing Zhao, Ananthram Swami
On the Connectivity and Multihop Delay of Ad Hoc Cognitive Radio Networks
28 pages, 9 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the multihop delay of ad hoc cognitive radio networks, where the transmission delay of each hop consists of the propagation delay and the waiting time for the availability of the communication channel (i.e., the occurrence of a spectrum opportunity at this hop). Using theories and techniques from continuum percolation and ergodicity, we establish the scaling law of the minimum multihop delay with respect to the source-destination distance in cognitive radio networks. When the propagation delay is negligible, we show the starkly different scaling behavior of the minimum multihop delay in instantaneously connected networks as compared to networks that are only intermittently connected due to scarcity of spectrum opportunities. Specifically, if the network is instantaneously connected, the minimum multihop delay is asymptotically independent of the distance; if the network is only intermittently connected, the minimum multihop delay scales linearly with the distance. When the propagation delay is nonnegligible but small, we show that although the scaling order is always linear, the scaling rate for an instantaneously connected network can be orders of magnitude smaller than the one for an intermittently connected network.
[ { "created": "Mon, 21 Dec 2009 06:47:42 GMT", "version": "v1" } ]
2009-12-22
[ [ "Ren", "Wei", "" ], [ "Zhao", "Qing", "" ], [ "Swami", "Ananthram", "" ] ]
We analyze the multihop delay of ad hoc cognitive radio networks, where the transmission delay of each hop consists of the propagation delay and the waiting time for the availability of the communication channel (i.e., the occurrence of a spectrum opportunity at this hop). Using theories and techniques from continuum percolation and ergodicity, we establish the scaling law of the minimum multihop delay with respect to the source-destination distance in cognitive radio networks. When the propagation delay is negligible, we show the starkly different scaling behavior of the minimum multihop delay in instantaneously connected networks as compared to networks that are only intermittently connected due to scarcity of spectrum opportunities. Specifically, if the network is instantaneously connected, the minimum multihop delay is asymptotically independent of the distance; if the network is only intermittently connected, the minimum multihop delay scales linearly with the distance. When the propagation delay is nonnegligible but small, we show that although the scaling order is always linear, the scaling rate for an instantaneously connected network can be orders of magnitude smaller than the one for an intermittently connected network.
2310.12403
Muhammed Fatih Bal{\i}n
Muhammed Fatih Balin, Dominique LaSalle, \"Umit V. \c{C}ataly\"urek
Cooperative Minibatching in Graph Neural Networks
Under submission
null
null
null
cs.LG cs.DC
http://creativecommons.org/licenses/by/4.0/
Significant computational resources are required to train Graph Neural Networks (GNNs) at a large scale, and the process is highly data-intensive. One of the most effective ways to reduce resource requirements is minibatch training coupled with graph sampling. GNNs have the unique property that items in a minibatch have overlapping data. However, the commonly implemented Independent Minibatching approach assigns each Processing Element (PE) its own minibatch to process, leading to duplicated computations and input data access across PEs. This amplifies the Neighborhood Explosion Phenomenon (NEP), which is the main bottleneck limiting scaling. To reduce the effects of NEP in the multi-PE setting, we propose a new approach called Cooperative Minibatching. Our approach capitalizes on the fact that the size of the sampled subgraph is a concave function of the batch size, leading to significant reductions in the amount of work per seed vertex as batch sizes increase. Hence, it is favorable for processors equipped with a fast interconnect to work on a large minibatch together as a single larger processor, instead of working on separate smaller minibatches, even though global batch size is identical. We also show how to take advantage of the same phenomenon in serial execution by generating dependent consecutive minibatches. Our experimental evaluations show up to 4x bandwidth savings for fetching vertex embeddings, by simply increasing this dependency without harming model convergence. Combining our proposed approaches, we achieve up to 64% speedup over Independent Minibatching on single-node multi-GPU systems.
[ { "created": "Thu, 19 Oct 2023 01:15:24 GMT", "version": "v1" }, { "created": "Sun, 22 Oct 2023 02:01:01 GMT", "version": "v2" } ]
2023-10-24
[ [ "Balin", "Muhammed Fatih", "" ], [ "LaSalle", "Dominique", "" ], [ "Çatalyürek", "Ümit V.", "" ] ]
Significant computational resources are required to train Graph Neural Networks (GNNs) at a large scale, and the process is highly data-intensive. One of the most effective ways to reduce resource requirements is minibatch training coupled with graph sampling. GNNs have the unique property that items in a minibatch have overlapping data. However, the commonly implemented Independent Minibatching approach assigns each Processing Element (PE) its own minibatch to process, leading to duplicated computations and input data access across PEs. This amplifies the Neighborhood Explosion Phenomenon (NEP), which is the main bottleneck limiting scaling. To reduce the effects of NEP in the multi-PE setting, we propose a new approach called Cooperative Minibatching. Our approach capitalizes on the fact that the size of the sampled subgraph is a concave function of the batch size, leading to significant reductions in the amount of work per seed vertex as batch sizes increase. Hence, it is favorable for processors equipped with a fast interconnect to work on a large minibatch together as a single larger processor, instead of working on separate smaller minibatches, even though global batch size is identical. We also show how to take advantage of the same phenomenon in serial execution by generating dependent consecutive minibatches. Our experimental evaluations show up to 4x bandwidth savings for fetching vertex embeddings, by simply increasing this dependency without harming model convergence. Combining our proposed approaches, we achieve up to 64% speedup over Independent Minibatching on single-node multi-GPU systems.
2101.02847
Yunjin Zhang
Yunjin Zhang, Rui Wang, Yifan (Evan) Peng, Wei Hua, Hujun Bao
Color Contrast Enhanced Rendering for Optical See-through Head-mounted Displays
13 pages, 22 figures, submitted to TVCG
null
null
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
Most commercially available optical see-through head-mounted displays (OST-HMDs) utilize optical combiners to simultaneously visualize the physical background and virtual objects. The displayed images perceived by users are a blend of rendered pixels and background colors. Enabling high fidelity color perception in mixed reality (MR) scenarios using OST-HMDs is an important but challenging task. We propose a real-time rendering scheme to enhance the color contrast between virtual objects and the surrounding background for OST-HMDs. Inspired by the discovery of color perception in psychophysics, we first formulate the color contrast enhancement as a constrained optimization problem. We then design an end-to-end algorithm to search the optimal complementary shift in both chromaticity and luminance of the displayed color. This aims at enhancing the contrast between virtual objects and the real background as well as keeping the consistency with the original color. We assess the performance of our approach using a simulated OST-HMD environment and an off-the-shelf OST-HMD. Experimental results from objective evaluations and subjective user studies demonstrate that the proposed approach makes rendered virtual objects more distinguishable from the surrounding background, thereby bringing a better visual experience.
[ { "created": "Fri, 8 Jan 2021 04:42:39 GMT", "version": "v1" } ]
2021-01-11
[ [ "Zhang", "Yunjin", "", "Evan" ], [ "Wang", "Rui", "", "Evan" ], [ "Yifan", "", "", "Evan" ], [ "Peng", "", "" ], [ "Hua", "Wei", "" ], [ "Bao", "Hujun", "" ] ]
Most commercially available optical see-through head-mounted displays (OST-HMDs) utilize optical combiners to simultaneously visualize the physical background and virtual objects. The displayed images perceived by users are a blend of rendered pixels and background colors. Enabling high fidelity color perception in mixed reality (MR) scenarios using OST-HMDs is an important but challenging task. We propose a real-time rendering scheme to enhance the color contrast between virtual objects and the surrounding background for OST-HMDs. Inspired by the discovery of color perception in psychophysics, we first formulate the color contrast enhancement as a constrained optimization problem. We then design an end-to-end algorithm to search the optimal complementary shift in both chromaticity and luminance of the displayed color. This aims at enhancing the contrast between virtual objects and the real background as well as keeping the consistency with the original color. We assess the performance of our approach using a simulated OST-HMD environment and an off-the-shelf OST-HMD. Experimental results from objective evaluations and subjective user studies demonstrate that the proposed approach makes rendered virtual objects more distinguishable from the surrounding background, thereby bringing a better visual experience.
2106.00329
Zihao Yan
Zihao Yan, Zimu Yi, Ruizhen Hu, Niloy J. Mitra, Daniel Cohen-Or, Hui Huang
Consistent Two-Flow Network for Tele-Registration of Point Clouds
Accepted to IEEE TVCG 2021, project page at https://vcc.tech/research/2021/CTFNet
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rigid registration of partial observations is a fundamental problem in various applied fields. In computer graphics, special attention has been given to the registration between two partial point clouds generated by scanning devices. State-of-the-art registration techniques still struggle when the overlap region between the two point clouds is small, and completely fail if there is no overlap between the scan pairs. In this paper, we present a learning-based technique that alleviates this problem, and allows registration between point clouds, presented in arbitrary poses, and having little or even no overlap, a setting that has been referred to as tele-registration. Our technique is based on a novel neural network design that learns a prior of a class of shapes and can complete a partial shape. The key idea is combining the registration and completion tasks in a way that reinforces each other. In particular, we simultaneously train the registration network and completion network using two coupled flows, one that registers-and-completes, and one that completes-and-registers, and encourage the two flows to produce a consistent result. We show that, compared with each separate flow, this two-flow training leads to robust and reliable tele-registration, and hence to a better point cloud prediction that completes the registered scans. It is also worth mentioning that each of the components in our neural network outperforms state-of-the-art methods in both completion and registration. We further analyze our network with several ablation studies and demonstrate its performance on a large number of partial point clouds, both synthetic and real-world, that have only small or no overlap.
[ { "created": "Tue, 1 Jun 2021 09:03:21 GMT", "version": "v1" }, { "created": "Tue, 20 Jul 2021 09:41:09 GMT", "version": "v2" }, { "created": "Mon, 11 Oct 2021 02:25:04 GMT", "version": "v3" } ]
2021-10-12
[ [ "Yan", "Zihao", "" ], [ "Yi", "Zimu", "" ], [ "Hu", "Ruizhen", "" ], [ "Mitra", "Niloy J.", "" ], [ "Cohen-Or", "Daniel", "" ], [ "Huang", "Hui", "" ] ]
Rigid registration of partial observations is a fundamental problem in various applied fields. In computer graphics, special attention has been given to the registration between two partial point clouds generated by scanning devices. State-of-the-art registration techniques still struggle when the overlap region between the two point clouds is small, and completely fail if there is no overlap between the scan pairs. In this paper, we present a learning-based technique that alleviates this problem, and allows registration between point clouds, presented in arbitrary poses, and having little or even no overlap, a setting that has been referred to as tele-registration. Our technique is based on a novel neural network design that learns a prior of a class of shapes and can complete a partial shape. The key idea is combining the registration and completion tasks in a way that reinforces each other. In particular, we simultaneously train the registration network and completion network using two coupled flows, one that registers-and-completes, and one that completes-and-registers, and encourage the two flows to produce a consistent result. We show that, compared with each separate flow, this two-flow training leads to robust and reliable tele-registration, and hence to a better point cloud prediction that completes the registered scans. It is also worth mentioning that each of the components in our neural network outperforms state-of-the-art methods in both completion and registration. We further analyze our network with several ablation studies and demonstrate its performance on a large number of partial point clouds, both synthetic and real-world, that have only small or no overlap.
1804.00101
Alan Roytman
Mikkel Abrahamsen, Anna Adamaszek, Karl Bringmann, Vincent Cohen-Addad, Mehran Mehr, Eva Rotenberg, Alan Roytman, Mikkel Thorup
Fast Fencing
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set $S$ of $n$ points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose $n$ unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most $k$ closed curves and pay no cost per curve. For the variant with at most $k$ closed curves, we present an algorithm that is polynomial in both $n$ and $k$. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most $k$ curves in $n^{O(k)}$ time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with $k$ curves is NP-hard for general $k$. Our polynomial time algorithm refutes this unless P equals NP.
[ { "created": "Sat, 31 Mar 2018 01:17:15 GMT", "version": "v1" } ]
2018-04-03
[ [ "Abrahamsen", "Mikkel", "" ], [ "Adamaszek", "Anna", "" ], [ "Bringmann", "Karl", "" ], [ "Cohen-Addad", "Vincent", "" ], [ "Mehr", "Mehran", "" ], [ "Rotenberg", "Eva", "" ], [ "Roytman", "Alan", "" ], [ "Thorup", "Mikkel", "" ] ]
We consider very natural "fence enclosure" problems studied by Capoyleas, Rote, and Woeginger and Arkin, Khuller, and Mitchell in the early 90s. Given a set $S$ of $n$ points in the plane, we aim at finding a set of closed curves such that (1) each point is enclosed by a curve and (2) the total length of the curves is minimized. We consider two main variants. In the first variant, we pay a unit cost per curve in addition to the total length of the curves. An equivalent formulation of this version is that we have to enclose $n$ unit disks, paying only the total length of the enclosing curves. In the other variant, we are allowed to use at most $k$ closed curves and pay no cost per curve. For the variant with at most $k$ closed curves, we present an algorithm that is polynomial in both $n$ and $k$. For the variant with unit cost per curve, or unit disks, we present a near-linear time algorithm. Capoyleas, Rote, and Woeginger solved the problem with at most $k$ curves in $n^{O(k)}$ time. Arkin, Khuller, and Mitchell used this to solve the unit cost per curve version in exponential time. At the time, they conjectured that the problem with $k$ curves is NP-hard for general $k$. Our polynomial time algorithm refutes this unless P equals NP.
2402.07639
Nir Weingarten
Nir Weingarten, Zohar Yakhini, Moshe Butman, Ran Gilad-Bachrach
Tighter Bounds on the Information Bottleneck with Application to Deep Learning
10 pages, 5 figures, code included in github repo
null
null
null
cs.LG cs.AI cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Deep Neural Nets (DNNs) learn latent representations induced by their downstream task, objective function, and other parameters. The quality of the learned representations impacts the DNN's generalization ability and the coherence of the emerging latent space. The Information Bottleneck (IB) provides a hypothetically optimal framework for data modeling, yet it is often intractable. Recent efforts combined DNNs with the IB by applying VAE-inspired variational methods to approximate bounds on mutual information, resulting in improved robustness to adversarial attacks. This work introduces a new and tighter variational bound for the IB, improving performance of previous IB-inspired DNNs. These advancements strengthen the case for the IB and its variational approximations as a data modeling framework, and provide a simple method to significantly enhance the adversarial robustness of classifier DNNs.
[ { "created": "Mon, 12 Feb 2024 13:24:32 GMT", "version": "v1" } ]
2024-02-13
[ [ "Weingarten", "Nir", "" ], [ "Yakhini", "Zohar", "" ], [ "Butman", "Moshe", "" ], [ "Gilad-Bachrach", "Ran", "" ] ]
Deep Neural Nets (DNNs) learn latent representations induced by their downstream task, objective function, and other parameters. The quality of the learned representations impacts the DNN's generalization ability and the coherence of the emerging latent space. The Information Bottleneck (IB) provides a hypothetically optimal framework for data modeling, yet it is often intractable. Recent efforts combined DNNs with the IB by applying VAE-inspired variational methods to approximate bounds on mutual information, resulting in improved robustness to adversarial attacks. This work introduces a new and tighter variational bound for the IB, improving performance of previous IB-inspired DNNs. These advancements strengthen the case for the IB and its variational approximations as a data modeling framework, and provide a simple method to significantly enhance the adversarial robustness of classifier DNNs.
2205.03464
Poorna Dasgupta
Poorna Banerjee Dasgupta
Comparative Analysis of Non-Blind Deblurring Methods for Noisy Blurred Images
8 pages, Published with International Journal of Computer Trends and Technology (IJCTT), Volume-70 Issue-3, 2022
null
10.14445/22312803/IJCTT-V70I3P101
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Image blurring refers to the degradation of an image wherein the image's overall sharpness decreases. Image blurring is caused by several factors. Additionally, during the image acquisition process, noise may get added to the image. Such a noisy and blurred image can be represented as the image resulting from the convolution of the original image with the associated point spread function, along with additive noise. However, the blurred image often contains inadequate information to uniquely determine the plausible original image. Based on the availability of blurring information, image deblurring methods can be classified as blind and non-blind. In non-blind image deblurring, some prior information is known regarding the corresponding point spread function and the added noise. The objective of this study is to determine the effectiveness of non-blind image deblurring methods with respect to the identification and elimination of noise present in blurred images. In this study, three non-blind image deblurring methods, namely Wiener deconvolution, Lucy-Richardson deconvolution, and regularized deconvolution, were comparatively analyzed for noisy images featuring salt-and-pepper noise. Two types of blurring effects were simulated, namely motion blurring and Gaussian blurring. These three non-blind deblurring methods were applied under two scenarios: direct deblurring of noisy blurred images and deblurring of images after denoising through the application of the adaptive median filter. The obtained results were then compared for each scenario to determine the best approach for deblurring noisy images.
[ { "created": "Fri, 6 May 2022 20:07:29 GMT", "version": "v1" } ]
2022-05-10
[ [ "Dasgupta", "Poorna Banerjee", "" ] ]
Image blurring refers to the degradation of an image wherein the image's overall sharpness decreases. Image blurring is caused by several factors. Additionally, during the image acquisition process, noise may get added to the image. Such a noisy and blurred image can be represented as the image resulting from the convolution of the original image with the associated point spread function, along with additive noise. However, the blurred image often contains inadequate information to uniquely determine the plausible original image. Based on the availability of blurring information, image deblurring methods can be classified as blind and non-blind. In non-blind image deblurring, some prior information is known regarding the corresponding point spread function and the added noise. The objective of this study is to determine the effectiveness of non-blind image deblurring methods with respect to the identification and elimination of noise present in blurred images. In this study, three non-blind image deblurring methods, namely Wiener deconvolution, Lucy-Richardson deconvolution, and regularized deconvolution, were comparatively analyzed for noisy images featuring salt-and-pepper noise. Two types of blurring effects were simulated, namely motion blurring and Gaussian blurring. These three non-blind deblurring methods were applied under two scenarios: direct deblurring of noisy blurred images and deblurring of images after denoising through the application of the adaptive median filter. The obtained results were then compared for each scenario to determine the best approach for deblurring noisy images.
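The denoising step in the second scenario relies on median filtering, which is well suited to salt-and-pepper noise because the impulse values land at the extremes of each neighborhood and never become the median. The study uses an adaptive median filter (whose window grows when the median itself is an impulse); the sketch below is a simplified fixed-window version, in pure Python on list-of-lists images, to show the core idea only:

```python
import statistics

def median_filter(img, k=3):
    """Fixed k x k median filter on a 2D list-of-lists image.
    Border pixels use the available (clipped) neighborhood. This is a
    simplified stand-in for the adaptive median filter in the study,
    which additionally enlarges its window around detected impulses."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = statistics.median(window)
    return out
```

On a flat patch with one "salt" impulse, the impulse is replaced by the surrounding value while the rest of the patch is unchanged.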
1209.3061
Sliman Arrag
Sliman Arrag, Abdellatif Hamdoun, Abderrahim Tragha and Salah eddine Khamlich
Design and Implementation A different Architectures of mixcolumn in FPGA
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper details the implementation of the AES encryption algorithm on an FPGA using the VHDL language, comparing different architectures for the MixColumns transformation. We investigate the AES algorithm on FPGAs using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Altera Quartus II software is used for simulation and optimization of the synthesizable VHDL code. The sets of transformations for both encryption and decryption are simulated using an iterative design approach in order to optimize hardware consumption. Altera Cyclone III family devices are used for hardware evaluation.
[ { "created": "Thu, 13 Sep 2012 23:08:53 GMT", "version": "v1" } ]
2012-09-17
[ [ "Arrag", "Sliman", "" ], [ "Hamdoun", "Abdellatif", "" ], [ "Tragha", "Abderrahim", "" ], [ "Khamlich", "Salah eddine", "" ] ]
This paper details the implementation of the AES encryption algorithm on an FPGA using the VHDL language, comparing different architectures for the MixColumns transformation. We investigate the AES algorithm on FPGAs using the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Altera Quartus II software is used for simulation and optimization of the synthesizable VHDL code. The sets of transformations for both encryption and decryption are simulated using an iterative design approach in order to optimize hardware consumption. Altera Cyclone III family devices are used for hardware evaluation.
1706.03311
Kristin Siu
Kristin Siu, Alexander Zook, Mark O. Riedl
A Framework for Exploring and Evaluating Mechanics in Human Computation Games
11 pages, 5 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human computation games (HCGs) are a crowdsourcing approach to solving computationally-intractable tasks using games. In this paper, we describe the need for generalizable HCG design knowledge that accommodates the needs of both players and tasks. We propose a formal representation of the mechanics in HCGs, providing a structural breakdown to visualize, compare, and explore the space of HCG mechanics. We present a methodology based on small-scale design experiments using fixed tasks while varying game elements to observe effects on both the player experience and the human computation task completion. Finally we discuss applications of our framework using comparisons of prior HCGs and recent design experiments. Ultimately, we wish to enable easier exploration and development of HCGs, helping these games provide meaningful player experiences while solving difficult problems.
[ { "created": "Sun, 11 Jun 2017 06:16:49 GMT", "version": "v1" } ]
2017-06-13
[ [ "Siu", "Kristin", "" ], [ "Zook", "Alexander", "" ], [ "Riedl", "Mark O.", "" ] ]
Human computation games (HCGs) are a crowdsourcing approach to solving computationally-intractable tasks using games. In this paper, we describe the need for generalizable HCG design knowledge that accommodates the needs of both players and tasks. We propose a formal representation of the mechanics in HCGs, providing a structural breakdown to visualize, compare, and explore the space of HCG mechanics. We present a methodology based on small-scale design experiments using fixed tasks while varying game elements to observe effects on both the player experience and the human computation task completion. Finally we discuss applications of our framework using comparisons of prior HCGs and recent design experiments. Ultimately, we wish to enable easier exploration and development of HCGs, helping these games provide meaningful player experiences while solving difficult problems.
2006.00064
Scott Schneider
Scott Schneider, Xavier Guerin, Shaohan Hu and Kun-Lung Wu
A Cloud Native Platform for Stateful Streaming
18 pages, 11 figures, submitted to OSDI 2020
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the architecture of a cloud native version of IBM Streams, with Kubernetes as our target platform. Streams is a general purpose streaming system with its own platform for managing applications and the compute clusters that execute those applications. Cloud native Streams replaces that platform with Kubernetes. By using Kubernetes as its platform, Streams is able to offload job management, life cycle tracking, address translation, fault tolerance and scheduling. This offloading is possible because we define custom resources that natively integrate into Kubernetes, allowing Streams to use Kubernetes' eventing system as its own. We use four design patterns to implement our system: controllers, conductors, coordinators and causal chains. Composing controllers, conductors and coordinators allows us to build deterministic state machines out of an asynchronous distributed system. The resulting implementation eliminates 75% of the original platform code. Our experimental results show that the performance of Kubernetes is an adequate replacement in most cases, but it has problems with oversubscription, networking latency, garbage collection and pod recovery.
[ { "created": "Fri, 29 May 2020 20:18:43 GMT", "version": "v1" } ]
2020-06-02
[ [ "Schneider", "Scott", "" ], [ "Guerin", "Xavier", "" ], [ "Hu", "Shaohan", "" ], [ "Wu", "Kun-Lung", "" ] ]
We present the architecture of a cloud native version of IBM Streams, with Kubernetes as our target platform. Streams is a general purpose streaming system with its own platform for managing applications and the compute clusters that execute those applications. Cloud native Streams replaces that platform with Kubernetes. By using Kubernetes as its platform, Streams is able to offload job management, life cycle tracking, address translation, fault tolerance and scheduling. This offloading is possible because we define custom resources that natively integrate into Kubernetes, allowing Streams to use Kubernetes' eventing system as its own. We use four design patterns to implement our system: controllers, conductors, coordinators and causal chains. Composing controllers, conductors and coordinators allows us to build deterministic state machines out of an asynchronous distributed system. The resulting implementation eliminates 75% of the original platform code. Our experimental results show that the performance of Kubernetes is an adequate replacement in most cases, but it has problems with oversubscription, networking latency, garbage collection and pod recovery.
2012.05359
Jaydeep Rade
Jaydeep Rade, Aditya Balu, Ethan Herron, Jay Pathak, Rishikesh Ranade, Soumik Sarkar, Adarsh Krishnamurthy
Algorithmically-Consistent Deep Learning Frameworks for Structural Topology Optimization
29 pages, 28 figures, 9 tables
Engineering Applications of Artificial Intelligence, 2021, Volume 106,104483
10.1016/j.engappai.2021.104483
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Topology optimization has emerged as a popular approach to refine a component's design and increase its performance. However, current state-of-the-art topology optimization frameworks are compute-intensive, mainly due to multiple finite element analysis iterations required to evaluate the component's performance during the optimization process. Recently, machine learning (ML)-based topology optimization methods have been explored by researchers to alleviate this issue. However, previous ML approaches have mainly been demonstrated on simple two-dimensional applications with low-resolution geometry. Further, current methods are based on a single ML model for end-to-end prediction, which requires a large dataset for training. These challenges make it non-trivial to extend current approaches to higher resolutions. In this paper, we develop deep learning-based frameworks consistent with traditional topology optimization algorithms for 3D topology optimization with a reasonably fine (high) resolution. We achieve this by training multiple networks, each learning a different step of the overall topology optimization methodology, making the framework more consistent with the topology optimization algorithm. We demonstrate the application of our framework on both 2D and 3D geometries. The results show that our approach predicts the final optimized design better (5.76x reduction in total compliance MSE in 2D; 2.03x reduction in total compliance MSE in 3D) than current ML-based topology optimization methods.
[ { "created": "Wed, 9 Dec 2020 23:05:55 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2022 03:50:31 GMT", "version": "v2" } ]
2022-10-27
[ [ "Rade", "Jaydeep", "" ], [ "Balu", "Aditya", "" ], [ "Herron", "Ethan", "" ], [ "Pathak", "Jay", "" ], [ "Ranade", "Rishikesh", "" ], [ "Sarkar", "Soumik", "" ], [ "Krishnamurthy", "Adarsh", "" ] ]
Topology optimization has emerged as a popular approach to refine a component's design and increase its performance. However, current state-of-the-art topology optimization frameworks are compute-intensive, mainly due to multiple finite element analysis iterations required to evaluate the component's performance during the optimization process. Recently, machine learning (ML)-based topology optimization methods have been explored by researchers to alleviate this issue. However, previous ML approaches have mainly been demonstrated on simple two-dimensional applications with low-resolution geometry. Further, current methods are based on a single ML model for end-to-end prediction, which requires a large dataset for training. These challenges make it non-trivial to extend current approaches to higher resolutions. In this paper, we develop deep learning-based frameworks consistent with traditional topology optimization algorithms for 3D topology optimization with a reasonably fine (high) resolution. We achieve this by training multiple networks, each learning a different step of the overall topology optimization methodology, making the framework more consistent with the topology optimization algorithm. We demonstrate the application of our framework on both 2D and 3D geometries. The results show that our approach predicts the final optimized design better (5.76x reduction in total compliance MSE in 2D; 2.03x reduction in total compliance MSE in 3D) than current ML-based topology optimization methods.
1509.05589
Lorenzo Saino
Ioannis Psaras, Konstantinos V. Katsaros, Lorenzo Saino and George Pavlou
LIRA: A Location Independent Routing Layer based on Source-Provided Ephemeral Names
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We identify the obstacles hindering the deployment of Information Centric Networking (ICN) and the shift from the current IP architecture. In particular, we argue that scalability of name resolution and the lack of control of content access from content providers are two important barriers that keep ICN away from deployment. We design solutions to incentivise ICN deployment and present a new network architecture that incorporates an extra layer in the protocol stack (the Location Independent Routing Layer, LIRA) to integrate location-independent content delivery. According to our design, content names need not (and should not) be permanent, but rather should be ephemeral. Resolution of non-permanent names requires the involvement of content providers, enabling desirable features such as request logging and cache purging, while avoiding the need for the deployment of a new name resolution infrastructure. Our results show that with half of the network's nodes operating under the LIRA framework, we can get the full gain of the ICN mode of operation.
[ { "created": "Fri, 18 Sep 2015 11:12:58 GMT", "version": "v1" } ]
2015-09-21
[ [ "Psaras", "Ioannis", "" ], [ "Katsaros", "Konstantinos V.", "" ], [ "Saino", "Lorenzo", "" ], [ "Pavlou", "George", "" ] ]
We identify the obstacles hindering the deployment of Information Centric Networking (ICN) and the shift from the current IP architecture. In particular, we argue that scalability of name resolution and the lack of control of content access from content providers are two important barriers that keep ICN away from deployment. We design solutions to incentivise ICN deployment and present a new network architecture that incorporates an extra layer in the protocol stack (the Location Independent Routing Layer, LIRA) to integrate location-independent content delivery. According to our design, content names need not (and should not) be permanent, but rather should be ephemeral. Resolution of non-permanent names requires the involvement of content providers, enabling desirable features such as request logging and cache purging, while avoiding the need for the deployment of a new name resolution infrastructure. Our results show that with half of the network's nodes operating under the LIRA framework, we can get the full gain of the ICN mode of operation.
1808.08665
Mehdi Ganji
Mehdi Ganji and Hamid Jafarkhani
Novel Time Asynchronous NOMA schemes for Downlink Transmissions
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we investigate the effect of time asynchrony in non-orthogonal multiple access (NOMA) schemes for downlink transmissions. First, we analyze the benefit of adding intentional timing offsets to conventional power-domain NOMA (P-NOMA). This method, called Asynchronous Power-Domain NOMA (AP-NOMA), introduces artificial symbol offsets between packets destined for different users. It reduces mutual interference, which enlarges the achievable rate region of conventional P-NOMA. Then, we propose a precoding scheme that fully exploits the degrees of freedom provided by the time asynchrony. We call this multiple access scheme T-NOMA; it provides higher degrees of freedom for users than conventional P-NOMA or even the modified AP-NOMA. T-NOMA adopts precoding at the base station and a linear preprocessing scheme at the receiving user that decomposes the broadcast channel into parallel channels, circumventing the need for Successive Interference Cancellation (SIC). The numerical results show that T-NOMA outperforms AP-NOMA, and both outperform conventional P-NOMA. We also compare the maximum sum rate and fairness provided by these methods. Moreover, the impact of pulse shape and symbol offset on the performance of the AP-NOMA and T-NOMA schemes is investigated.
[ { "created": "Mon, 27 Aug 2018 02:12:03 GMT", "version": "v1" } ]
2018-08-28
[ [ "Ganji", "Mehdi", "" ], [ "Jafarkhani", "Hamid", "" ] ]
In this work, we investigate the effect of time asynchrony in non-orthogonal multiple access (NOMA) schemes for downlink transmissions. First, we analyze the benefit of adding intentional timing offsets to conventional power-domain NOMA (P-NOMA). This method, called Asynchronous Power-Domain NOMA (AP-NOMA), introduces artificial symbol offsets between packets destined for different users. It reduces mutual interference, which enlarges the achievable rate region of conventional P-NOMA. Then, we propose a precoding scheme that fully exploits the degrees of freedom provided by the time asynchrony. We call this multiple access scheme T-NOMA; it provides higher degrees of freedom for users than conventional P-NOMA or even the modified AP-NOMA. T-NOMA adopts precoding at the base station and a linear preprocessing scheme at the receiving user that decomposes the broadcast channel into parallel channels, circumventing the need for Successive Interference Cancellation (SIC). The numerical results show that T-NOMA outperforms AP-NOMA, and both outperform conventional P-NOMA. We also compare the maximum sum rate and fairness provided by these methods. Moreover, the impact of pulse shape and symbol offset on the performance of the AP-NOMA and T-NOMA schemes is investigated.
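As background for the rate-region comparison: in conventional two-user downlink P-NOMA with SIC, the achievable rates have a simple closed form. The sketch below computes them under illustrative simplifying assumptions (unit channel gains, a given power split); it shows the baseline the paper improves upon, not the AP-NOMA or T-NOMA schemes themselves:

```python
import math

def pnoma_rates(p_total, a_weak, n0=1.0):
    """Two-user downlink P-NOMA achievable rates (bits/s/Hz).
    a_weak: fraction of total power allocated to the weak (far) user.
    The weak user decodes its signal treating the strong user's signal
    as noise; the strong user cancels the weak user's signal via SIC.
    Unit channel gains assumed for simplicity."""
    p_weak = a_weak * p_total
    p_strong = (1.0 - a_weak) * p_total
    r_weak = math.log2(1.0 + p_weak / (p_strong + n0))
    r_strong = math.log2(1.0 + p_strong / n0)
    return r_weak, r_strong
```

Sweeping `a_weak` from 0 to 1 traces out the P-NOMA rate region boundary that AP-NOMA's artificial symbol offsets are shown to enlarge.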
1911.05921
Van-Dang Tran
Van-Dang Tran, Hiroyuki Kato, Zhenjiang Hu
Programmable View Update Strategies on Relations
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
View update is an important mechanism that allows updates on a view by translating them into the corresponding updates on the base relations. The existing literature has shown the ambiguity of translating view updates. To address this ambiguity, we propose a robust language-based approach for making view update strategies programmable and validatable. Specifically, we introduce a novel approach to use Datalog to describe these update strategies. We propose a validation algorithm to check the well-behavedness of the written Datalog programs. We present a fragment of the Datalog language for which our validation is both sound and complete. This fragment not only has good properties in theory but is also useful for solving practical view updates. Furthermore, we develop an algorithm for optimizing user-written programs to efficiently implement updatable views in relational database management systems. We have implemented our proposed approach. The experimental results show that our framework is feasible and efficient in practice.
[ { "created": "Thu, 14 Nov 2019 03:40:32 GMT", "version": "v1" }, { "created": "Wed, 22 Jan 2020 04:08:05 GMT", "version": "v2" }, { "created": "Mon, 31 Aug 2020 16:07:10 GMT", "version": "v3" } ]
2020-09-01
[ [ "Tran", "Van-Dang", "" ], [ "Kato", "Hiroyuki", "" ], [ "Hu", "Zhenjiang", "" ] ]
View update is an important mechanism that allows updates on a view by translating them into the corresponding updates on the base relations. The existing literature has shown the ambiguity of translating view updates. To address this ambiguity, we propose a robust language-based approach for making view update strategies programmable and validatable. Specifically, we introduce a novel approach to use Datalog to describe these update strategies. We propose a validation algorithm to check the well-behavedness of the written Datalog programs. We present a fragment of the Datalog language for which our validation is both sound and complete. This fragment not only has good properties in theory but is also useful for solving practical view updates. Furthermore, we develop an algorithm for optimizing user-written programs to efficiently implement updatable views in relational database management systems. We have implemented our proposed approach. The experimental results show that our framework is feasible and efficient in practice.
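The paper expresses update strategies in Datalog; a rough SQL analogue of a "programmable" strategy is an INSTEAD OF trigger, which likewise translates a view update into base-relation updates. A toy sketch using Python's built-in sqlite3 (the schema and the chosen translation policy are ours, purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp(name TEXT, dept TEXT);
CREATE TABLE dept(dept TEXT PRIMARY KEY, loc TEXT);
CREATE VIEW emp_loc AS
  SELECT e.name, d.loc FROM emp e JOIN dept d ON e.dept = d.dept;
-- One possible strategy: an insert on the view is translated into
-- inserts on the base relations (new employees are placed in 'hq').
CREATE TRIGGER emp_loc_ins INSTEAD OF INSERT ON emp_loc
BEGIN
  INSERT OR IGNORE INTO dept VALUES ('hq', NEW.loc);
  INSERT INTO emp VALUES (NEW.name, 'hq');
END;
""")
con.execute("INSERT INTO dept VALUES ('sales', 'Tokyo')")
con.execute("INSERT INTO emp VALUES ('ann', 'sales')")
con.execute("INSERT INTO emp_loc VALUES ('bob', 'Osaka')")  # view update
rows = sorted(con.execute("SELECT * FROM emp_loc"))
```

The ambiguity the paper addresses is visible even here: many other trigger bodies would also make the inserted row appear in the view, and the well-behavedness validation is what distinguishes acceptable translations.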
2206.12795
Lloyd Allison
Lloyd Allison
Applications of Recursively Defined Data Structures
The paper originally appeared in the Australian Computer Journal (ISSN 0004-8917). The journal was published by the Australian Computer Society from 1967 to 1999
Australian Computer Journal, 25(1):14-20,February 1993
null
null
cs.DS cs.PL
http://creativecommons.org/licenses/by-nc-sa/4.0/
A circular program contains a data structure whose definition is self-referential or recursive. The use of such a definition allows efficient functional programs to be written and can avoid repeated evaluations and the creation of intermediate data structures that would have to be garbage collected. This paper uses circular programs in various ways, to implement memo-structures and explicit search-trees to hold solutions to constraint-satisfaction problems.
[ { "created": "Sun, 26 Jun 2022 06:02:06 GMT", "version": "v1" } ]
2022-06-28
[ [ "Allison", "Lloyd", "" ] ]
A circular program contains a data structure whose definition is self-referential or recursive. The use of such a definition allows efficient functional programs to be written and can avoid repeated evaluations and the creation of intermediate data structures that would have to be garbage collected. This paper uses circular programs in various ways, to implement memo-structures and explicit search-trees to hold solutions to constraint-satisfaction problems.
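The memo-structures described here are data structures whose later parts are defined in terms of their own earlier parts. The paper's circular programs rely on lazy evaluation; Python is strict, so the sketch below only emulates the idea by filling a self-referential table in dependency order (Fibonacci as the classic example; the function name is ours):

```python
def make_memo_table(n):
    """A memo-structure: entry i of the table is defined in terms of
    the table's own earlier entries, avoiding repeated evaluation.
    In a lazy language this would be a genuinely circular definition;
    here the self-reference is emulated by filling in dependency order."""
    table = []
    for i in range(n):
        table.append(i if i < 2 else table[i - 1] + table[i - 2])
    return table
```

Each entry is computed exactly once, which is the efficiency benefit the paper attributes to circular definitions.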
2309.07974
Jack Lanchantin
Jack Lanchantin, Sainbayar Sukhbaatar, Gabriel Synnaeve, Yuxuan Sun, Kavya Srinet, Arthur Szlam
A Data Source for Reasoning Embodied Agents
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent progress in using machine learning models for reasoning tasks has been driven by novel model architectures, large-scale pre-training protocols, and dedicated reasoning datasets for fine-tuning. In this work, to further pursue these advances, we introduce a new data generator for machine reasoning that integrates with an embodied agent. The generated data consists of templated text queries and answers, matched with world-states encoded into a database. The world-states are a result of both world dynamics and the actions of the agent. We show the results of several baseline models on instantiations of train sets. These include pre-trained language models fine-tuned on a text-formatted representation of the database, and graph-structured Transformers operating on a knowledge-graph representation of the database. We find that these models can answer some questions about the world-state, but struggle with others. These results hint at new research directions in designing neural reasoning models and database representations. Code to generate the data will be released at github.com/facebookresearch/neuralmemory
[ { "created": "Thu, 14 Sep 2023 18:17:16 GMT", "version": "v1" } ]
2023-09-18
[ [ "Lanchantin", "Jack", "" ], [ "Sukhbaatar", "Sainbayar", "" ], [ "Synnaeve", "Gabriel", "" ], [ "Sun", "Yuxuan", "" ], [ "Srinet", "Kavya", "" ], [ "Szlam", "Arthur", "" ] ]
Recent progress in using machine learning models for reasoning tasks has been driven by novel model architectures, large-scale pre-training protocols, and dedicated reasoning datasets for fine-tuning. In this work, to further pursue these advances, we introduce a new data generator for machine reasoning that integrates with an embodied agent. The generated data consists of templated text queries and answers, matched with world-states encoded into a database. The world-states are a result of both world dynamics and the actions of the agent. We show the results of several baseline models on instantiations of train sets. These include pre-trained language models fine-tuned on a text-formatted representation of the database, and graph-structured Transformers operating on a knowledge-graph representation of the database. We find that these models can answer some questions about the world-state, but struggle with others. These results hint at new research directions in designing neural reasoning models and database representations. Code to generate the data will be released at github.com/facebookresearch/neuralmemory
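The core mechanism, templated text queries and answers generated against an encoded world-state, can be illustrated in miniature. The world-state schema and templates below are our own toy stand-ins, not the released generator's:

```python
import random

def generate_qa(world, n=3, seed=0):
    """Toy templated QA generator. world maps an object name to its
    properties, e.g. {"cube": {"color": "red", "pos": (1, 2)}}.
    Each pair is produced by sampling an object and a fixed template,
    then reading the answer out of the world-state."""
    rng = random.Random(seed)
    templates = [
        ("What color is the {o}?", lambda p: p["color"]),
        ("Where is the {o}?", lambda p: str(p["pos"])),
    ]
    pairs = []
    for _ in range(n):
        name, props = rng.choice(sorted(world.items()))
        q_tmpl, answer_fn = rng.choice(templates)
        pairs.append((q_tmpl.format(o=name), answer_fn(props)))
    return pairs
```

Because answers are read directly from the world-state, every generated pair is correct by construction, which is what makes such data usable for fine-tuning and evaluating reasoning models.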
2308.13074
Srivathsan Gnanasekaran Morkonda
Srivathsan G. Morkonda, Sonia Chiasson, Paul C. van Oorschot
Influences of Displaying Permission-related Information on Web Single Sign-On Login Decisions
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web users are increasingly presented with multiple login options, including password-based login and common web single sign-on (SSO) login options such as "Login with Google" and "Login with Facebook". There has been little focus in previous studies on how users choose from a list of login options and how to better inform users about privacy issues in web SSO systems. In this paper, we conducted a 200-participant study to understand factors that influence participants' login decisions, and how they are affected by displaying permission differences across login options; permissions in SSO result in release of user personal information to third-party web sites through SSO identity providers. We compare and report on login decisions made by participants before and after viewing permission-related information, examine self-reported responses for reasons related to their login decisions, and report on the factors that motivated their choices. We find that usability preferences and inertia (habituation) were among the dominant factors influencing login decisions. After participants viewed permission-related information, many prioritised privacy over other factors, changing their login decisions to more privacy-friendly alternatives. Displaying permission-related information also influenced some participants to make tradeoffs between privacy and usability preferences.
[ { "created": "Thu, 24 Aug 2023 20:35:09 GMT", "version": "v1" }, { "created": "Thu, 28 Dec 2023 18:30:36 GMT", "version": "v2" } ]
2023-12-29
[ [ "Morkonda", "Srivathsan G.", "" ], [ "Chiasson", "Sonia", "" ], [ "van Oorschot", "Paul C.", "" ] ]
Web users are increasingly presented with multiple login options, including password-based login and common web single sign-on (SSO) login options such as "Login with Google" and "Login with Facebook". There has been little focus in previous studies on how users choose from a list of login options and how to better inform users about privacy issues in web SSO systems. In this paper, we conducted a 200-participant study to understand factors that influence participants' login decisions, and how they are affected by displaying permission differences across login options; permissions in SSO result in release of user personal information to third-party web sites through SSO identity providers. We compare and report on login decisions made by participants before and after viewing permission-related information, examine self-reported responses for reasons related to their login decisions, and report on the factors that motivated their choices. We find that usability preferences and inertia (habituation) were among the dominant factors influencing login decisions. After participants viewed permission-related information, many prioritised privacy over other factors, changing their login decisions to more privacy-friendly alternatives. Displaying permission-related information also influenced some participants to make tradeoffs between privacy and usability preferences.
1304.1128
Robert Fung
Robert Fung, S. L. Crawford, Lee A. Appelbaum, Richard M. Tong
An Architecture for Probabilistic Concept-Based Information Retrieval
Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI1990)
null
null
UAI-P-1990-PG-392-404
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While concept-based methods for information retrieval can provide improved performance over more conventional techniques, they require large amounts of effort to acquire the concepts and their qualitative and quantitative relationships. This paper discusses an architecture for probabilistic concept-based information retrieval which addresses the knowledge acquisition problem. The architecture makes use of the probabilistic networks technology for representing and reasoning about concepts and includes a knowledge acquisition component which partially automates the construction of concept knowledge bases from data. We describe two experiments that apply the architecture to the task of retrieving documents about terrorism from a set of documents from the Reuters news service. The experiments provide positive evidence that the architecture design is feasible and that there are advantages to concept-based methods.
[ { "created": "Wed, 27 Mar 2013 13:58:58 GMT", "version": "v1" } ]
2013-04-05
[ [ "Fung", "Robert", "" ], [ "Crawford", "S. L.", "" ], [ "Appelbaum", "Lee A.", "" ], [ "Tong", "Richard M.", "" ] ]
2306.12310
Pranauv Aj
Niketha Sabesan, Nivethitha, J.N Shreyah, Pranauv A J, Shyam R
Medical ministrations through web scraping
null
null
null
null
cs.CL
http://creativecommons.org/publicdomain/zero/1.0/
Web scraping is a technique that allows us to extract data from websites automatically. In the field of medicine, web scraping can be used to collect information about medical procedures, treatments, and healthcare providers. This information can be used to improve patient care, monitor the quality of healthcare services, and identify areas for improvement. One area where web scraping can be particularly useful is in medical ministrations. Medical ministrations are the actions taken to provide medical care to patients, and web scraping can help healthcare providers identify the most effective ministrations for their patients. For example, healthcare providers can use web scraping to collect data about the symptoms and medical histories of their patients, and then use this information to determine the most appropriate ministrations. They can also use web scraping to gather information about the latest medical research and clinical trials, which can help them stay up-to-date with the latest treatments and procedures.
[ { "created": "Wed, 21 Jun 2023 14:43:25 GMT", "version": "v1" } ]
2023-06-22
[ [ "Sabesan", "Niketha", "" ], [ "Nivethitha", "", "" ], [ "Shreyah", "J. N", "" ], [ "J", "Pranauv A", "" ], [ "R", "Shyam", "" ] ]
2312.12908
Pau Torras
Pau Torras and Sanket Biswas and Alicia Forn\'es
The Common Optical Music Recognition Evaluation Framework
18 pages, 4 figures, 3 tables, submitted (under review) for the International Journal in Document Analysis and Recognition
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The quality of Optical Music Recognition (OMR) systems is a rather difficult quantity to measure. There is no lingua franca shared among OMR datasets that allows comparing systems' performance on equal grounds, since most of them are specialised in certain approaches. As a result, most state-of-the-art works currently report metrics that cannot be compared directly. In this paper we identify the need for a common music representation language and propose the Music Tree Notation (MTN) format, thanks to which the definition of standard metrics is possible. This format represents music as a set of primitives that group together into higher-abstraction nodes, a compromise between the expression of fully graph-based and sequential notation formats. We have also developed a specific set of OMR metrics and a typeset score dataset as a proof of concept of this idea.
[ { "created": "Wed, 20 Dec 2023 10:45:22 GMT", "version": "v1" } ]
2023-12-21
[ [ "Torras", "Pau", "" ], [ "Biswas", "Sanket", "" ], [ "Fornés", "Alicia", "" ] ]
2004.00865
Timotej Ga\v{s}par
Timotej Ga\v{s}par, Miha Deni\v{s}a and Ale\v{s} Ude
A reconfigurable robot workcell for quick set-up of assembly processes
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-volume production has been a prerequisite for investing in automation of the manufacturing process for decades. The high cost of setup and the inflexibility of classical automation meant that low-batch productions, often present in Small and Medium-sized Enterprises (SMEs), were dismissed as potential end users of automation technologies. In this extended abstract we present the results of the ReconCell project, whose objective was to develop a new type of highly reconfigurable robot workcell for fast set-up of automated assembly processes in SMEs. The high degree of reconfigurability was achieved by the developed reconfigurable hardware and the complementary reconfigurable software, while fast set-up was achieved with technologies for fast robot programming.
[ { "created": "Thu, 2 Apr 2020 08:26:23 GMT", "version": "v1" } ]
2020-04-03
[ [ "Gašpar", "Timotej", "" ], [ "Deniša", "Miha", "" ], [ "Ude", "Aleš", "" ] ]
1309.4616
Lukas Einkemmer
Lukas Einkemmer and Alexander Ostermann
Exponential Integrators on Graphic Processing Units
To appear in: Proceedings of the 2013 International Conference on High Performance Computing Simulation (HPCS 2013), IEEE (2013)
High Performance Computing and Simulation (HPCS), 2013 International Conference on, pp. 490-496
10.1109/HPCSim.2013.6641458
null
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we revisit stencil methods on GPUs in the context of exponential integrators. We further discuss boundary conditions, in the same context, and show that simple boundary conditions (for example, homogeneous Dirichlet or homogeneous Neumann boundary conditions) do not affect the performance if implemented directly into the CUDA kernel. In addition, we show that stencil methods with position-dependent coefficients can be implemented efficiently as well. As an application, we discuss the implementation of exponential integrators for different classes of problems in a single and multi GPU setup (up to 4 GPUs). We further show that for stencil based methods such parallelization can be done very efficiently, while for some unstructured matrices the parallelization to multiple GPUs is severely limited by the throughput of the PCIe bus.
[ { "created": "Wed, 18 Sep 2013 11:21:05 GMT", "version": "v1" } ]
2014-05-27
[ [ "Einkemmer", "Lukas", "" ], [ "Ostermann", "Alexander", "" ] ]
1910.05291
Serhii Havrylov
Shangmin Guo, Yi Ren, Serhii Havrylov, Stella Frank, Ivan Titov, Kenny Smith
The Emergence of Compositional Languages for Numeric Concepts Through Iterated Learning in Neural Agents
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since it was first introduced, computer simulation has been an increasingly important tool in evolutionary linguistics. Recently, with the development of deep learning techniques, research in grounded language learning has also started to focus on facilitating the emergence of compositional languages without pre-defined elementary linguistic knowledge. In this work, we explore the emergence of compositional languages for numeric concepts in multi-agent communication systems. We demonstrate that compositional language for encoding numeric concepts can emerge through iterated learning in populations of deep neural network agents. However, language properties greatly depend on the input representations given to agents. We found that compositional languages only emerge if they require fewer iterations to be fully learnt than other non-degenerate languages for agents on a given input representation.
[ { "created": "Fri, 11 Oct 2019 16:34:01 GMT", "version": "v1" } ]
2019-10-14
[ [ "Guo", "Shangmin", "" ], [ "Ren", "Yi", "" ], [ "Havrylov", "Serhii", "" ], [ "Frank", "Stella", "" ], [ "Titov", "Ivan", "" ], [ "Smith", "Kenny", "" ] ]
2205.12443
Kaiyu Yang
Kaiyu Yang and Jia Deng and Danqi Chen
Generating Natural Language Proofs with Verifier-Guided Search
EMNLP 2022. Code and models are available at https://github.com/princeton-nlp/NLProofS. v3 added evaluation of GPT-3 and Codex
null
null
null
cs.CL cs.LG cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reasoning over natural language is a challenging problem in NLP. In this work, we focus on proof generation: Given a hypothesis and a set of supporting facts, the model generates a proof tree indicating how to derive the hypothesis from supporting facts. Compared to generating the entire proof in one shot, stepwise generation can better exploit the compositionality and generalize to longer proofs but has achieved limited success on real-world data. Existing stepwise methods struggle to generate proof steps that are both logically valid and relevant to the hypothesis. Instead, they tend to hallucinate invalid steps given the hypothesis. In this paper, we present a novel stepwise method, NLProofS (Natural Language Proof Search), which learns to generate relevant steps conditioning on the hypothesis. At the core of our approach, we train an independent verifier to check the validity of the proof steps to prevent hallucination. Instead of generating steps greedily, we search for proofs maximizing a global proof score judged by the verifier. NLProofS achieves state-of-the-art performance on EntailmentBank and RuleTaker. Specifically, it improves the correctness of predicted proofs from 27.7% to 33.3% in the distractor setting of EntailmentBank, demonstrating the effectiveness of NLProofS in generating challenging human-authored proofs.
[ { "created": "Wed, 25 May 2022 02:22:30 GMT", "version": "v1" }, { "created": "Tue, 18 Oct 2022 17:33:26 GMT", "version": "v2" }, { "created": "Fri, 21 Oct 2022 20:08:11 GMT", "version": "v3" } ]
2022-10-25
[ [ "Yang", "Kaiyu", "" ], [ "Deng", "Jia", "" ], [ "Chen", "Danqi", "" ] ]
0906.4618
Pedro Peris-Lopez
Pedro Peris-Lopez, Julio C. Hernandez-Castro, Christos Dimitrakakis, Aikaterini Mitrokotsa, Juan M. E. Tapiador
Shedding Light on RFID Distance Bounding Protocols and Terrorist Fraud Attacks
31 pages, 10 figures, 1 table
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vast majority of RFID authentication protocols assume proximity between readers and tags due to the limited range of the radio channel. However, in real scenarios an intruder can be located between the prover (tag) and the verifier (reader) and trick the latter into thinking that the prover is in close proximity. This attack is generally known as a relay attack, whose scope includes distance fraud, mafia fraud and terrorist fraud attacks. Distance bounding protocols represent a promising countermeasure to hinder relay attacks. Several protocols have been proposed during the last years, but vulnerabilities of major or minor relevance have been identified in most of them. In 2008, Kim et al. [1] proposed a new distance bounding protocol with the objective of being the best in terms of security, privacy, tag computational overhead and fault tolerance. In this paper, we analyze this protocol and present a passive full disclosure attack, which allows an adversary to discover the long-term secret key of the tag. The presented attack is very relevant, since no security objectives are met in Kim et al.'s protocol. Then, design guidelines are introduced with the aim of facilitating for protocol designers the stimulating task of designing secure and efficient schemes against relay attacks. Finally, a new protocol, named Hitomi and inspired by [1], is designed conforming to the guidelines proposed previously.
[ { "created": "Thu, 25 Jun 2009 07:12:26 GMT", "version": "v1" }, { "created": "Sun, 20 Jun 2010 19:35:19 GMT", "version": "v2" } ]
2010-06-22
[ [ "Peris-Lopez", "Pedro", "" ], [ "Hernandez-Castro", "Julio C.", "" ], [ "Dimitrakakis", "Christos", "" ], [ "Mitrokotsa", "Aikaterini", "" ], [ "Tapiador", "Juan M. E.", "" ] ]
2010.01385
Purnata Ghosal
Purnata Ghosal and B. V. Raghavendra Rao
Limitations of Sums of Bounded-Read Formulas
20 pages, 3 figures
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proving super-polynomial size lower bounds for various classes of arithmetic circuits computing explicit polynomials is a very important and challenging task in algebraic complexity theory. We study representation of polynomials as sums of weaker models such as read-once formulas (ROFs) and read-once oblivious algebraic branching programs (ROABPs). We prove: (1) An exponential separation between sums of ROFs and read-$k$ formulas for some constant $k$. (2) A sub-exponential separation between sums of ROABPs and syntactic multilinear ABPs. Our results are based on analysis of the partial derivative matrix under different distributions. These results highlight the richness of bounded-read restrictions in arithmetic formulas and ABPs. Finally, we consider a generalization of multilinear ROABPs known as strict-interval ABPs defined in [Ramya-Rao, MFCS2019]. We show that strict-interval ABPs are equivalent to ROABPs up to a polynomial size blow-up. In contrast, we show that interval formulas are different from ROFs and also admit depth reduction, which is not known in the case of strict-interval ABPs.
[ { "created": "Sat, 3 Oct 2020 16:41:22 GMT", "version": "v1" } ]
2020-10-06
[ [ "Ghosal", "Purnata", "" ], [ "Rao", "B. V. Raghavendra", "" ] ]
1605.03269
Junpei Zhong
Junpei Zhong and Rony Novianto and Mingjun Dai and Xinzheng Zhang and Angelo Cangelosi
A Hierarchical Emotion Regulated Sensorimotor Model: Case Studies
Accepted at The 5th International Conference on Data-Driven Control and Learning Systems. 2016
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by the hierarchical cognitive architecture and the perception-action model (PAM), we propose that the internal status acts as a kind of common-coding representation which affects, mediates and even regulates sensorimotor behaviours. This regulation can be depicted in the Bayesian framework, which is why cognitive agents are able to generate behaviours with subtle differences according to their emotion or to recognize the emotion by perception. A novel recurrent neural network called the recurrent neural network with parametric bias units (RNNPB), which runs in three modes and constructs a two-level emotion-regulated learning model, was further applied to test this theory in two different cases.
[ { "created": "Wed, 11 May 2016 03:22:13 GMT", "version": "v1" } ]
2016-05-12
[ [ "Zhong", "Junpei", "" ], [ "Novianto", "Rony", "" ], [ "Dai", "Mingjun", "" ], [ "Zhang", "Xinzheng", "" ], [ "Cangelosi", "Angelo", "" ] ]
1211.3666
Shuang Li
Shuang Li, Zizhan Zheng, Eylem Ekici and Ness B. Shroff
Maximizing System Throughput Using Cooperative Sensing in Multi-Channel Cognitive Radio Networks
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Cognitive Radio Networks (CRNs), unlicensed secondary users (SUs) are allowed to access the licensed spectrum when it is not currently being used by primary users (PUs). In this paper, we study the throughput maximization problem for a multi-channel CRN where each SU can only sense a limited number of channels. We show that this problem is strongly NP-hard, and propose an approximation algorithm with a factor of at least $1/2\mu$, where $\mu \in [1,2]$ is a system parameter reflecting the sensing capability of SUs across channels and their sensing budgets. This performance guarantee is achieved by exploiting a nice structural property of the objective function and constructing a particular matching. Our numerical results demonstrate the advantage of our algorithm compared with both a random and a greedy sensing assignment algorithm.
[ { "created": "Thu, 15 Nov 2012 17:22:33 GMT", "version": "v1" } ]
2012-11-16
[ [ "Li", "Shuang", "" ], [ "Zheng", "Zizhan", "" ], [ "Ekici", "Eylem", "" ], [ "Shroff", "Ness B.", "" ] ]
2302.05442
Mostafa Dehghani
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin F. Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Patrick Collier, Alexey Gritsenko, Vighnesh Birodkar, Cristina Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Paveti\'c, Dustin Tran, Thomas Kipf, Mario Lu\v{c}i\'c, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, Neil Houlsby
Scaling Vision Transformers to 22 Billion Parameters
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scaling of Transformers has driven breakthrough capabilities for language models. At present, the largest large language models (LLMs) contain upwards of 100B parameters. Vision Transformers (ViT) have introduced the same architecture to image and video modelling, but these have not yet been successfully scaled to nearly the same degree; the largest dense ViT contains 4B parameters (Chen et al., 2022). We present a recipe for highly efficient and stable training of a 22B-parameter ViT (ViT-22B) and perform a wide variety of experiments on the resulting model. When evaluated on downstream tasks (often with a lightweight linear model on frozen features), ViT-22B demonstrates increasing performance with scale. We further observe other interesting benefits of scale, including an improved tradeoff between fairness and performance, state-of-the-art alignment to human visual perception in terms of shape/texture bias, and improved robustness. ViT-22B demonstrates the potential for "LLM-like" scaling in vision, and provides key steps towards getting there.
[ { "created": "Fri, 10 Feb 2023 18:58:21 GMT", "version": "v1" } ]
2023-02-13
[ [ "Dehghani", "Mostafa", "" ], [ "Djolonga", "Josip", "" ], [ "Mustafa", "Basil", "" ], [ "Padlewski", "Piotr", "" ], [ "Heek", "Jonathan", "" ], [ "Gilmer", "Justin", "" ], [ "Steiner", "Andreas", "" ], [ "Caron", "Mathilde", "" ], [ "Geirhos", "Robert", "" ], [ "Alabdulmohsin", "Ibrahim", "" ], [ "Jenatton", "Rodolphe", "" ], [ "Beyer", "Lucas", "" ], [ "Tschannen", "Michael", "" ], [ "Arnab", "Anurag", "" ], [ "Wang", "Xiao", "" ], [ "Riquelme", "Carlos", "" ], [ "Minderer", "Matthias", "" ], [ "Puigcerver", "Joan", "" ], [ "Evci", "Utku", "" ], [ "Kumar", "Manoj", "" ], [ "van Steenkiste", "Sjoerd", "" ], [ "Elsayed", "Gamaleldin F.", "" ], [ "Mahendran", "Aravindh", "" ], [ "Yu", "Fisher", "" ], [ "Oliver", "Avital", "" ], [ "Huot", "Fantine", "" ], [ "Bastings", "Jasmijn", "" ], [ "Collier", "Mark Patrick", "" ], [ "Gritsenko", "Alexey", "" ], [ "Birodkar", "Vighnesh", "" ], [ "Vasconcelos", "Cristina", "" ], [ "Tay", "Yi", "" ], [ "Mensink", "Thomas", "" ], [ "Kolesnikov", "Alexander", "" ], [ "Pavetić", "Filip", "" ], [ "Tran", "Dustin", "" ], [ "Kipf", "Thomas", "" ], [ "Lučić", "Mario", "" ], [ "Zhai", "Xiaohua", "" ], [ "Keysers", "Daniel", "" ], [ "Harmsen", "Jeremiah", "" ], [ "Houlsby", "Neil", "" ] ]
2104.03841
Daniela Massiceti
Daniela Massiceti, Luisa Zintgraf, John Bronskill, Lida Theodorou, Matthew Tobias Harris, Edward Cutrell, Cecily Morrison, Katja Hofmann, Simone Stumpf
ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition
IEEE/CVF International Conference on Computer Vision (ICCV), 2021
null
10.25383/city.14294597
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Object recognition has made great advances in the last decade, but predominantly still relies on many high-quality training examples per object category. In contrast, learning new objects from only a few examples could enable many impactful applications from robotics to user personalization. Most few-shot learning research, however, has been driven by benchmark datasets that lack the high variation that these applications will face when deployed in the real world. To close this gap, we present the ORBIT dataset and benchmark, grounded in the real-world application of teachable object recognizers for people who are blind/low-vision. The dataset contains 3,822 videos of 486 objects recorded by people who are blind/low-vision on their mobile phones. The benchmark reflects a realistic, highly challenging recognition problem, providing a rich playground to drive research in robustness to few-shot, high-variation conditions. We set the benchmark's first state-of-the-art and show there is massive scope for further innovation, holding the potential to impact a broad range of real-world vision applications including tools for the blind/low-vision community. We release the dataset at https://doi.org/10.25383/city.14294597 and benchmark code at https://github.com/microsoft/ORBIT-Dataset.
[ { "created": "Thu, 8 Apr 2021 15:32:01 GMT", "version": "v1" }, { "created": "Fri, 9 Apr 2021 16:56:43 GMT", "version": "v2" }, { "created": "Thu, 10 Jun 2021 14:50:34 GMT", "version": "v3" }, { "created": "Mon, 16 Aug 2021 16:19:12 GMT", "version": "v4" }, { "created": "Fri, 8 Oct 2021 13:20:52 GMT", "version": "v5" } ]
2021-10-11
[ [ "Massiceti", "Daniela", "" ], [ "Zintgraf", "Luisa", "" ], [ "Bronskill", "John", "" ], [ "Theodorou", "Lida", "" ], [ "Harris", "Matthew Tobias", "" ], [ "Cutrell", "Edward", "" ], [ "Morrison", "Cecily", "" ], [ "Hofmann", "Katja", "" ], [ "Stumpf", "Simone", "" ] ]
Object recognition has made great advances in the last decade, but predominantly still relies on many high-quality training examples per object category. In contrast, learning new objects from only a few examples could enable many impactful applications from robotics to user personalization. Most few-shot learning research, however, has been driven by benchmark datasets that lack the high variation that these applications will face when deployed in the real world. To close this gap, we present the ORBIT dataset and benchmark, grounded in the real-world application of teachable object recognizers for people who are blind/low-vision. The dataset contains 3,822 videos of 486 objects recorded by people who are blind/low-vision on their mobile phones. The benchmark reflects a realistic, highly challenging recognition problem, providing a rich playground to drive research in robustness to few-shot, high-variation conditions. We set the benchmark's first state-of-the-art and show there is massive scope for further innovation, holding the potential to impact a broad range of real-world vision applications including tools for the blind/low-vision community. We release the dataset at https://doi.org/10.25383/city.14294597 and benchmark code at https://github.com/microsoft/ORBIT-Dataset.
2111.09625
Diego Garbervetsky
Saikat Dutta, Diego Garbervetsky, Shuvendu Lahiri, Max Sch\"afer
InspectJS: Leveraging Code Similarity and User-Feedback for Effective Taint Specification Inference for JavaScript
11 pages, sent to Software Engineering in Practice track at ICSE'2022
null
null
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Static analysis has established itself as a weapon of choice for detecting security vulnerabilities. Taint analysis in particular is a very general and powerful technique, where security policies are expressed in terms of forbidden flows, either from untrusted input sources to sensitive sinks (in integrity policies) or from sensitive sources to untrusted sinks (in confidentiality policies). The appeal of this approach is that the taint-tracking mechanism has to be implemented only once, and can then be parameterized with different taint specifications (that is, sets of sources and sinks, as well as any sanitizers that render otherwise problematic flows innocuous) to detect many different kinds of vulnerabilities. But while techniques for implementing scalable inter-procedural static taint tracking are fairly well established, crafting taint specifications is still more of an art than a science, and in practice tends to involve a lot of manual effort. Past work has focussed on automated techniques for inferring taint specifications for libraries either from their implementation or from the way they tend to be used in client code. Among the latter, machine learning-based approaches have shown great promise. In this work we present our experience combining an existing machine-learning approach to mining sink specifications for JavaScript libraries with manual taint modelling in the context of GitHub's CodeQL analysis framework. We show that the machine-learning component can successfully infer many new taint sinks that either are not part of the manual modelling or are not detected due to analysis incompleteness. Moreover, we present techniques for organizing sink predictions using automated ranking and code-similarity metrics that allow an analysis engineer to efficiently sift through large numbers of predictions to identify true positives.
[ { "created": "Thu, 18 Nov 2021 11:10:04 GMT", "version": "v1" } ]
2021-11-19
[ [ "Dutta", "Saikat", "" ], [ "Garbervetsky", "Diego", "" ], [ "Lahiri", "Shuvendu", "" ], [ "Schäfer", "Max", "" ] ]
Static analysis has established itself as a weapon of choice for detecting security vulnerabilities. Taint analysis in particular is a very general and powerful technique, where security policies are expressed in terms of forbidden flows, either from untrusted input sources to sensitive sinks (in integrity policies) or from sensitive sources to untrusted sinks (in confidentiality policies). The appeal of this approach is that the taint-tracking mechanism has to be implemented only once, and can then be parameterized with different taint specifications (that is, sets of sources and sinks, as well as any sanitizers that render otherwise problematic flows innocuous) to detect many different kinds of vulnerabilities. But while techniques for implementing scalable inter-procedural static taint tracking are fairly well established, crafting taint specifications is still more of an art than a science, and in practice tends to involve a lot of manual effort. Past work has focussed on automated techniques for inferring taint specifications for libraries either from their implementation or from the way they tend to be used in client code. Among the latter, machine learning-based approaches have shown great promise. In this work we present our experience combining an existing machine-learning approach to mining sink specifications for JavaScript libraries with manual taint modelling in the context of GitHub's CodeQL analysis framework. We show that the machine-learning component can successfully infer many new taint sinks that either are not part of the manual modelling or are not detected due to analysis incompleteness. Moreover, we present techniques for organizing sink predictions using automated ranking and code-similarity metrics that allow an analysis engineer to efficiently sift through large numbers of predictions to identify true positives.
2006.13534
Ehsan Asali
Ehsan Asali, Farzin Negahbani, Shahriyar Bamaei, Zahra Abbasi
Namira Soccer 2D Simulation Team Description Paper 2020
null
null
null
null
cs.RO cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we discuss the methods and ideas implemented in the Namira 2D Soccer Simulation team in the past year. Numerous scientific and programming activities were carried out in the course of code development, but we describe only the most outstanding ones in detail. A Kalman filtering method for localization and two helpful software packages are discussed here. Namira uses agent2d-3.1.1 as its base code and librcsc-4.1.0 as a library, with some deliberate changes.
[ { "created": "Wed, 24 Jun 2020 07:40:44 GMT", "version": "v1" } ]
2020-06-25
[ [ "Asali", "Ehsan", "" ], [ "Negahbani", "Farzin", "" ], [ "Bamaei", "Shahriyar", "" ], [ "Abbasi", "Zahra", "" ] ]
In this article, we discuss the methods and ideas implemented in the Namira 2D Soccer Simulation team in the past year. Numerous scientific and programming activities were carried out in the course of code development, but we describe only the most outstanding ones in detail. A Kalman filtering method for localization and two helpful software packages are discussed here. Namira uses agent2d-3.1.1 as its base code and librcsc-4.1.0 as a library, with some deliberate changes.
cs/0403027
Francesc Rossello
Jaume Casasnovas, Joe Miro, Manuel Moya, Francesc Rossello
An approach to membrane computing under inexactitude
20 pages, 0 figures
null
null
null
cs.OH cs.NE
null
In this paper we introduce a fuzzy version of symport/antiport membrane systems. Our fuzzy membrane systems handle possibly inexact copies of reactives and their rules are endowed with threshold functions that determine whether a rule can be applied or not to a given set of objects, depending of the degree of accuracy of these objects to the reactives specified in the rule. We prove that these fuzzy membrane systems generate exactly the recursively enumerable finite-valued fuzzy sets of natural numbers.
[ { "created": "Tue, 16 Mar 2004 09:02:39 GMT", "version": "v1" }, { "created": "Tue, 11 May 2004 08:27:15 GMT", "version": "v2" } ]
2007-05-23
[ [ "Casasnovas", "Jaume", "" ], [ "Miro", "Joe", "" ], [ "Moya", "Manuel", "" ], [ "Rossello", "Francesc", "" ] ]
In this paper we introduce a fuzzy version of symport/antiport membrane systems. Our fuzzy membrane systems handle possibly inexact copies of reactives and their rules are endowed with threshold functions that determine whether a rule can be applied or not to a given set of objects, depending of the degree of accuracy of these objects to the reactives specified in the rule. We prove that these fuzzy membrane systems generate exactly the recursively enumerable finite-valued fuzzy sets of natural numbers.
2401.05509
MohammadNoor Injadat
MohammadNoor Injadat
Optimized Ensemble Model Towards Secured Industrial IoT Devices
Accepted and presented in 24th International Arab Conference on Information Technology (ACIT'2023)
null
null
null
cs.CR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The continued growth in the deployment of Internet-of-Things (IoT) devices has been fueled by the increased demand for connectivity, particularly in industrial environments. However, this has led to an increase in the number of network-related attacks due to the larger number of potential attack surfaces. Industrial IoT (IIoT) devices are prone to various network-related attacks that can have severe consequences for the manufacturing process as well as for the safety of workers in the manufacturing plant. One promising solution that has emerged in recent years for attack detection is machine learning (ML). More specifically, ensemble learning models have shown great promise in improving the performance of the underlying ML models. Accordingly, this paper proposes a framework based on the combined use of Bayesian Optimization-Gaussian Process (BO-GP) with an ensemble tree-based learning model to improve the performance of intrusion and attack detection in IIoT environments. The proposed framework's performance is evaluated using the Windows 10 dataset collected by the Cyber Range and IoT labs at the University of New South Wales. Experimental results illustrate the improvement in detection accuracy, precision, and F-score when compared to standard tree and ensemble tree models.
[ { "created": "Wed, 10 Jan 2024 19:06:39 GMT", "version": "v1" } ]
2024-01-12
[ [ "Injadat", "MohammadNoor", "" ] ]
The continued growth in the deployment of Internet-of-Things (IoT) devices has been fueled by the increased demand for connectivity, particularly in industrial environments. However, this has led to an increase in the number of network-related attacks due to the larger number of potential attack surfaces. Industrial IoT (IIoT) devices are prone to various network-related attacks that can have severe consequences for the manufacturing process as well as for the safety of workers in the manufacturing plant. One promising solution that has emerged in recent years for attack detection is machine learning (ML). More specifically, ensemble learning models have shown great promise in improving the performance of the underlying ML models. Accordingly, this paper proposes a framework based on the combined use of Bayesian Optimization-Gaussian Process (BO-GP) with an ensemble tree-based learning model to improve the performance of intrusion and attack detection in IIoT environments. The proposed framework's performance is evaluated using the Windows 10 dataset collected by the Cyber Range and IoT labs at the University of New South Wales. Experimental results illustrate the improvement in detection accuracy, precision, and F-score when compared to standard tree and ensemble tree models.
1809.01898
Jo\~ao R. Campos
Jo\~ao R. Campos, Marco Vieira, Ernesto Costa
Propheticus: Generalizable Machine Learning Framework
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to recent technological developments, Machine Learning (ML), a subfield of Artificial Intelligence (AI), has been successfully used to process and extract knowledge from a variety of complex problems. However, a thorough ML approach is complex and highly dependent on the problem at hand. Additionally, implementing the logic required to execute the experiments is no small or trivial deed, consequently increasing the probability of faulty code which can compromise the results. Propheticus is a data-driven framework which arose from the need for a tool that abstracts some of the inherent complexity of ML, whilst being easy to understand and use, as well as to adapt and expand to meet the user's specific needs. Propheticus systematizes and enforces various complex concepts of an ML experiment workflow, taking into account the nature of both the problem and the data. It contains functionalities to execute all the different tasks, from data preprocessing to results analysis and comparison. Notwithstanding, it can be fairly easily adapted to different problems due to its flexible architecture, and customized as needed to address the user's needs.
[ { "created": "Thu, 6 Sep 2018 09:26:03 GMT", "version": "v1" } ]
2018-09-10
[ [ "Campos", "João R.", "" ], [ "Vieira", "Marco", "" ], [ "Costa", "Ernesto", "" ] ]
Due to recent technological developments, Machine Learning (ML), a subfield of Artificial Intelligence (AI), has been successfully used to process and extract knowledge from a variety of complex problems. However, a thorough ML approach is complex and highly dependent on the problem at hand. Additionally, implementing the logic required to execute the experiments is no small or trivial deed, consequently increasing the probability of faulty code which can compromise the results. Propheticus is a data-driven framework which arose from the need for a tool that abstracts some of the inherent complexity of ML, whilst being easy to understand and use, as well as to adapt and expand to meet the user's specific needs. Propheticus systematizes and enforces various complex concepts of an ML experiment workflow, taking into account the nature of both the problem and the data. It contains functionalities to execute all the different tasks, from data preprocessing to results analysis and comparison. Notwithstanding, it can be fairly easily adapted to different problems due to its flexible architecture, and customized as needed to address the user's needs.
1911.04382
Zhuo Feng
Zhuo Feng
GRASS: Graph Spectral Sparsification Leveraging Scalable Spectral Perturbation Analysis
14 pages, 13 figures. arXiv admin note: substantial text overlap with arXiv:1711.05135
null
null
null
cs.DS cs.NA cs.SI math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spectral graph sparsification aims to find ultra-sparse subgraphs whose Laplacian matrix can well approximate the original Laplacian eigenvalues and eigenvectors. In recent years, spectral sparsification techniques have been extensively studied for accelerating various numerical and graph-related applications. Prior nearly-linear-time spectral sparsification methods first extract a low-stretch spanning tree from the original graph to form the backbone of the sparsifier, and then recover small portions of spectrally-critical off-tree edges to the spanning tree to significantly improve the approximation quality. However, it is not clear how many off-tree edges should be recovered for achieving a desired spectral similarity level within the sparsifier. Motivated by recent graph signal processing techniques, this paper proposes a similarity-aware spectral graph sparsification framework that leverages efficient spectral off-tree edge embedding and filtering schemes to construct spectral sparsifiers with a guaranteed spectral similarity (relative condition number) level. An iterative graph densification scheme is also introduced to facilitate efficient and effective filtering of off-tree edges for highly ill-conditioned problems. The proposed method has been validated using various kinds of graphs obtained from public domain sparse matrix collections relevant to VLSI CAD, finite element analysis, as well as social and data networks frequently studied in many machine learning and data mining applications. For instance, a sparse SDD matrix with 40 million unknowns and 180 million nonzeros can be solved (to a 1E-3 accuracy level) within two minutes using a single CPU core and about 6GB memory.
[ { "created": "Mon, 4 Nov 2019 00:47:36 GMT", "version": "v1" }, { "created": "Thu, 21 Nov 2019 12:33:59 GMT", "version": "v2" }, { "created": "Wed, 29 Apr 2020 01:17:42 GMT", "version": "v3" } ]
2020-04-30
[ [ "Feng", "Zhuo", "" ] ]
Spectral graph sparsification aims to find ultra-sparse subgraphs whose Laplacian matrix can well approximate the original Laplacian eigenvalues and eigenvectors. In recent years, spectral sparsification techniques have been extensively studied for accelerating various numerical and graph-related applications. Prior nearly-linear-time spectral sparsification methods first extract a low-stretch spanning tree from the original graph to form the backbone of the sparsifier, and then recover small portions of spectrally-critical off-tree edges to the spanning tree to significantly improve the approximation quality. However, it is not clear how many off-tree edges should be recovered for achieving a desired spectral similarity level within the sparsifier. Motivated by recent graph signal processing techniques, this paper proposes a similarity-aware spectral graph sparsification framework that leverages efficient spectral off-tree edge embedding and filtering schemes to construct spectral sparsifiers with a guaranteed spectral similarity (relative condition number) level. An iterative graph densification scheme is also introduced to facilitate efficient and effective filtering of off-tree edges for highly ill-conditioned problems. The proposed method has been validated using various kinds of graphs obtained from public domain sparse matrix collections relevant to VLSI CAD, finite element analysis, as well as social and data networks frequently studied in many machine learning and data mining applications. For instance, a sparse SDD matrix with 40 million unknowns and 180 million nonzeros can be solved (to a 1E-3 accuracy level) within two minutes using a single CPU core and about 6GB memory.
2301.10638
Johanni Brea
Johanni Brea, Flavio Martinelli, Berfin \c{S}im\c{s}ek, Wulfram Gerstner
MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
MLPGradientFlow is a software package to solve numerically the gradient flow differential equation $\dot \theta = -\nabla \mathcal L(\theta; \mathcal D)$, where $\theta$ are the parameters of a multi-layer perceptron, $\mathcal D$ is some data set, and $\nabla \mathcal L$ is the gradient of a loss function. We show numerically that adaptive first- or higher-order integration methods based on Runge-Kutta schemes have better accuracy and convergence speed than gradient descent with the Adam optimizer. However, we find Newton's method and approximations like BFGS preferable to find fixed points (local and global minima of $\mathcal L$) efficiently and accurately. For small networks and data sets, gradients are usually computed faster than in PyTorch, and Hessians are computed at least $5\times$ faster. Additionally, the package features an integrator for a teacher-student setup with bias-free, two-layer networks trained with standard Gaussian input in the limit of infinite data. The code is accessible at https://github.com/jbrea/MLPGradientFlow.jl.
[ { "created": "Wed, 25 Jan 2023 15:21:44 GMT", "version": "v1" } ]
2023-01-26
[ [ "Brea", "Johanni", "" ], [ "Martinelli", "Flavio", "" ], [ "Şimşek", "Berfin", "" ], [ "Gerstner", "Wulfram", "" ] ]
MLPGradientFlow is a software package to solve numerically the gradient flow differential equation $\dot \theta = -\nabla \mathcal L(\theta; \mathcal D)$, where $\theta$ are the parameters of a multi-layer perceptron, $\mathcal D$ is some data set, and $\nabla \mathcal L$ is the gradient of a loss function. We show numerically that adaptive first- or higher-order integration methods based on Runge-Kutta schemes have better accuracy and convergence speed than gradient descent with the Adam optimizer. However, we find Newton's method and approximations like BFGS preferable to find fixed points (local and global minima of $\mathcal L$) efficiently and accurately. For small networks and data sets, gradients are usually computed faster than in PyTorch, and Hessians are computed at least $5\times$ faster. Additionally, the package features an integrator for a teacher-student setup with bias-free, two-layer networks trained with standard Gaussian input in the limit of infinite data. The code is accessible at https://github.com/jbrea/MLPGradientFlow.jl.
1901.00295
Xingjian Du
Xingjian Du, Mengyao Zhu, Xuan Shi, Xinpeng Zhang, Wen Zhang, Jingdong Chen
End-to-End Model for Speech Enhancement by Consistent Spectrogram Masking
null
null
null
null
cs.SD cs.AI cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, phase processing has been attracting increasing interest in the speech enhancement community. Some researchers integrate a phase estimation module into speech enhancement models by using complex-valued short-time Fourier transform (STFT) spectrogram based training targets, e.g. the Complex Ratio Mask (cRM) [1]. However, masking on the spectrogram would violate its consistency constraints. In this work, we prove that the inconsistency problem enlarges the solution space of the speech enhancement model and causes unintended artifacts. Consistency Spectrogram Masking (CSM) is proposed to estimate the complex spectrogram of a signal under the consistency constraint in a simple but not trivial way. Experiments comparing our CSM based end-to-end model with other methods are conducted to confirm that CSM accelerates model training and yields significant improvements in speech quality. From our experimental results, we are assured that our method could enha
[ { "created": "Wed, 2 Jan 2019 08:39:05 GMT", "version": "v1" } ]
2019-01-03
[ [ "Du", "Xingjian", "" ], [ "Zhu", "Mengyao", "" ], [ "Shi", "Xuan", "" ], [ "Zhang", "Xinpeng", "" ], [ "Zhang", "Wen", "" ], [ "Chen", "Jingdong", "" ] ]
Recently, phase processing has been attracting increasing interest in the speech enhancement community. Some researchers integrate a phase estimation module into speech enhancement models by using complex-valued short-time Fourier transform (STFT) spectrogram based training targets, e.g. the Complex Ratio Mask (cRM) [1]. However, masking on the spectrogram would violate its consistency constraints. In this work, we prove that the inconsistency problem enlarges the solution space of the speech enhancement model and causes unintended artifacts. Consistency Spectrogram Masking (CSM) is proposed to estimate the complex spectrogram of a signal under the consistency constraint in a simple but not trivial way. Experiments comparing our CSM based end-to-end model with other methods are conducted to confirm that CSM accelerates model training and yields significant improvements in speech quality. From our experimental results, we are assured that our method could enha
2012.01101
Kim Phuc Tran
Zhenglei He, Kim Phuc Tran (GEMTEX), Sebastien Thomassey, Xianyi Zeng, Jie Xu, Changhai Yi
Multi-Objective Optimization of the Textile Manufacturing Process Using Deep-Q-Network Based Multi-Agent Reinforcement Learning
null
null
null
null
cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-objective optimization of the textile manufacturing process is an increasing challenge because of the growing complexity involved in the development of the textile industry. The use of intelligent techniques has often been discussed in this domain; although significant improvements from certain successful applications have been reported, the traditional methods failed to cope with such high complexity and still required human intervention. To address this, this paper proposes a multi-agent reinforcement learning (MARL) framework to transform the optimization process into a stochastic game and introduces the deep Q-networks algorithm to train the multiple agents. A utilitarian selection mechanism was employed in the stochastic game, which adopts the $\epsilon$-greedy policy in each state to avoid the interruption of multiple equilibria and achieve the correlated equilibrium optimal solutions of the optimization process. The case study results show that the proposed MARL system is able to achieve optimal solutions for the textile ozonation process and performs better than the traditional approaches.
[ { "created": "Wed, 2 Dec 2020 11:37:44 GMT", "version": "v1" } ]
2020-12-03
[ [ "He", "Zhenglei", "", "GEMTEX" ], [ "Tran", "Kim Phuc", "", "GEMTEX" ], [ "Thomassey", "Sebastien", "" ], [ "Zeng", "Xianyi", "" ], [ "Xu", "Jie", "" ], [ "Yi", "Changhai", "" ] ]
Multi-objective optimization of the textile manufacturing process is an increasing challenge because of the growing complexity involved in the development of the textile industry. The use of intelligent techniques has often been discussed in this domain; although significant improvements from certain successful applications have been reported, the traditional methods failed to cope with such high complexity and still required human intervention. To address this, this paper proposes a multi-agent reinforcement learning (MARL) framework to transform the optimization process into a stochastic game and introduces the deep Q-networks algorithm to train the multiple agents. A utilitarian selection mechanism was employed in the stochastic game, which adopts the $\epsilon$-greedy policy in each state to avoid the interruption of multiple equilibria and achieve the correlated equilibrium optimal solutions of the optimization process. The case study results show that the proposed MARL system is able to achieve optimal solutions for the textile ozonation process and performs better than the traditional approaches.
2303.11546
Sunghwan Kim
Sunghwan Kim, Dae-hwan Kim, Hoseong Kim
Texture Learning Domain Randomization for Domain Generalized Segmentation
ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep Neural Networks (DNNs)-based semantic segmentation models trained on a source domain often struggle to generalize to unseen target domains, i.e., a domain gap problem. Texture often contributes to the domain gap, making DNNs vulnerable to domain shift because they are prone to be texture-biased. Existing Domain Generalized Semantic Segmentation (DGSS) methods have alleviated the domain gap problem by guiding models to prioritize shape over texture. On the other hand, shape and texture are two prominent and complementary cues in semantic segmentation. This paper argues that leveraging texture is crucial for improving performance in DGSS. Specifically, we propose a novel framework, coined Texture Learning Domain Randomization (TLDR). TLDR includes two novel losses to effectively enhance texture learning in DGSS: (1) a texture regularization loss to prevent overfitting to source domain textures by using texture features from an ImageNet pre-trained model and (2) a texture generalization loss that utilizes random style images to learn diverse texture representations in a self-supervised manner. Extensive experimental results demonstrate the superiority of the proposed TLDR; e.g., TLDR achieves 46.5 mIoU on GTA-to-Cityscapes using ResNet-50, which improves the prior state-of-the-art method by 1.9 mIoU. The source code is available at https://github.com/ssssshwan/TLDR.
[ { "created": "Tue, 21 Mar 2023 02:23:26 GMT", "version": "v1" }, { "created": "Thu, 17 Aug 2023 10:39:37 GMT", "version": "v2" } ]
2023-08-21
[ [ "Kim", "Sunghwan", "" ], [ "Kim", "Dae-hwan", "" ], [ "Kim", "Hoseong", "" ] ]
Deep Neural Networks (DNNs)-based semantic segmentation models trained on a source domain often struggle to generalize to unseen target domains, i.e., a domain gap problem. Texture often contributes to the domain gap, making DNNs vulnerable to domain shift because they are prone to be texture-biased. Existing Domain Generalized Semantic Segmentation (DGSS) methods have alleviated the domain gap problem by guiding models to prioritize shape over texture. On the other hand, shape and texture are two prominent and complementary cues in semantic segmentation. This paper argues that leveraging texture is crucial for improving performance in DGSS. Specifically, we propose a novel framework, coined Texture Learning Domain Randomization (TLDR). TLDR includes two novel losses to effectively enhance texture learning in DGSS: (1) a texture regularization loss to prevent overfitting to source domain textures by using texture features from an ImageNet pre-trained model and (2) a texture generalization loss that utilizes random style images to learn diverse texture representations in a self-supervised manner. Extensive experimental results demonstrate the superiority of the proposed TLDR; e.g., TLDR achieves 46.5 mIoU on GTA-to-Cityscapes using ResNet-50, which improves the prior state-of-the-art method by 1.9 mIoU. The source code is available at https://github.com/ssssshwan/TLDR.
2302.06396
Thibaut Verron
Manuel Kauers, Christoph Koutschan, Thibaut Verron
Transcendence Certificates for D-finite Functions
9 pages, 1 figure
Proceedings of International Symposium on Symbolic and Algebraic Computation 2023
10.1145/3597066.3597091
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although in theory we can decide whether a given D-finite function is transcendental, transcendence proofs remain a challenge in practice. Typically, transcendence is certified by checking certain incomplete sufficient conditions. In this paper we propose an additional such condition which catches some cases on which other tests fail.
[ { "created": "Mon, 13 Feb 2023 14:36:20 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2023 12:14:17 GMT", "version": "v2" } ]
2023-09-20
[ [ "Kauers", "Manuel", "" ], [ "Koutschan", "Christoph", "" ], [ "Verron", "Thibaut", "" ] ]
Although in theory we can decide whether a given D-finite function is transcendental, transcendence proofs remain a challenge in practice. Typically, transcendence is certified by checking certain incomplete sufficient conditions. In this paper we propose an additional such condition which catches some cases on which other tests fail.
1712.01794
Svetlana Kiritchenko
Svetlana Kiritchenko and Saif M. Mohammad
The Effect of Negators, Modals, and Degree Adverbs on Sentiment Composition
In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA), San Diego, California, 2016
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Negators, modals, and degree adverbs can significantly affect the sentiment of the words they modify. Often, their impact is modeled with simple heuristics, although recent work has shown that such heuristics do not capture the true sentiment of multi-word phrases. We created a dataset of phrases that include various negators, modals, and degree adverbs, as well as their combinations. Both the phrases and their constituent content words were annotated with real-valued scores of sentiment association. Using phrasal terms in the created dataset, we analyze the impact of individual modifiers and the average effect of the groups of modifiers on overall sentiment. We find that the effect of modifiers varies substantially among the members of the same group. Furthermore, each individual modifier can affect sentiment words in different ways. Therefore, solutions based on statistical learning seem more promising than fixed hand-crafted rules on the task of automatic sentiment prediction.
[ { "created": "Tue, 5 Dec 2017 18:17:43 GMT", "version": "v1" } ]
2017-12-06
[ [ "Kiritchenko", "Svetlana", "" ], [ "Mohammad", "Saif M.", "" ] ]
Negators, modals, and degree adverbs can significantly affect the sentiment of the words they modify. Often, their impact is modeled with simple heuristics, although recent work has shown that such heuristics do not capture the true sentiment of multi-word phrases. We created a dataset of phrases that include various negators, modals, and degree adverbs, as well as their combinations. Both the phrases and their constituent content words were annotated with real-valued scores of sentiment association. Using phrasal terms in the created dataset, we analyze the impact of individual modifiers and the average effect of the groups of modifiers on overall sentiment. We find that the effect of modifiers varies substantially among the members of the same group. Furthermore, each individual modifier can affect sentiment words in different ways. Therefore, solutions based on statistical learning seem more promising than fixed hand-crafted rules on the task of automatic sentiment prediction.
2310.05932
Mian Ibad Ali Shah
Mian Ibad Ali Shah, Abdul Wahid, Enda Barrett, Karl Mason
A Multi-Agent Systems Approach for Peer-to-Peer Energy Trading in Dairy Farming
Proc. of the Artificial Intelligence for Sustainability, ECAI 2023, Eunika et al. (eds.), Sep 30- Oct 1, 2023, https://sites.google.com/view/ai4s. 2023
null
null
null
cs.MA cs.AI cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To achieve desired carbon emission reductions, integrating renewable generation and accelerating the adoption of peer-to-peer energy trading is crucial. This is especially important for energy-intensive farming, like dairy farming. However, integrating renewables and peer-to-peer trading presents challenges. To address this, we propose the Multi-Agent Peer-to-Peer Dairy Farm Energy Simulator (MAPDES), enabling dairy farms to participate in peer-to-peer markets. Our strategy reduces electricity costs and peak demand by approximately 30% and 24% respectively, while increasing energy sales by 37% compared to the baseline scenario without P2P trading. This demonstrates the effectiveness of our approach.
[ { "created": "Mon, 21 Aug 2023 13:22:20 GMT", "version": "v1" } ]
2023-10-11
[ [ "Shah", "Mian Ibad Ali", "" ], [ "Wahid", "Abdul", "" ], [ "Barrett", "Enda", "" ], [ "Mason", "Karl", "" ] ]
To achieve desired carbon emission reductions, integrating renewable generation and accelerating the adoption of peer-to-peer energy trading is crucial. This is especially important for energy-intensive farming, like dairy farming. However, integrating renewables and peer-to-peer trading presents challenges. To address this, we propose the Multi-Agent Peer-to-Peer Dairy Farm Energy Simulator (MAPDES), enabling dairy farms to participate in peer-to-peer markets. Our strategy reduces electricity costs and peak demand by approximately 30% and 24% respectively, while increasing energy sales by 37% compared to the baseline scenario without P2P trading. This demonstrates the effectiveness of our approach.
1910.00974
Mutaz Melhem
Mutaz Y. Melhem and Laszlo B. Kish
Generalized DC loop current attack against the KLJN secure key exchange scheme
11 pages, 6 figures, journal paper
Metrol. Meas. Syst., Vol. 26 (2019) No. 4, pp. 607-616
10.24425/mms.2019.130571
null
cs.ET cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new attack against the Kirchhoff Law Johnson Noise (KLJN) secure key distribution system is studied with unknown parasitic DC voltage sources at both Alice's and Bob's ends. This paper is the generalization of our earlier investigation with a single-end parasitic source. Under the assumption that Eve does not know the values of the parasitic sources, a new attack, utilizing the current generated by the parasitic DC voltage sources, is introduced. The attack is mathematically analyzed and demonstrated by computer simulations. Simple defense methods against the attack are shown. The earlier defense method based solely on the comparison of current/voltage data at Alice's and Bob's terminals is useless here since the wire currents and voltages are equal at both ends. However, the more expensive version of the earlier defense method, which is based on in situ system simulation and comparison with measurements, works efficiently.
[ { "created": "Mon, 30 Sep 2019 19:34:32 GMT", "version": "v1" } ]
2020-04-07
[ [ "Melhem", "Mutaz Y.", "" ], [ "Kish", "Laszlo B.", "" ] ]
A new attack against the Kirchhoff Law Johnson Noise (KLJN) secure key distribution system is studied with unknown parasitic DC voltage sources at both Alice's and Bob's ends. This paper is the generalization of our earlier investigation with a single-end parasitic source. Under the assumption that Eve does not know the values of the parasitic sources, a new attack, utilizing the current generated by the parasitic DC voltage sources, is introduced. The attack is mathematically analyzed and demonstrated by computer simulations. Simple defense methods against the attack are shown. The earlier defense method based solely on the comparison of current/voltage data at Alice's and Bob's terminals is useless here since the wire currents and voltages are equal at both ends. However, the more expensive version of the earlier defense method, which is based on in situ system simulation and comparison with measurements, works efficiently.
2305.03130
Kaixin Ma
Kaixin Ma, Hao Cheng, Yu Zhang, Xiaodong Liu, Eric Nyberg, Jianfeng Gao
Chain-of-Skills: A Configurable Model for Open-domain Question Answering
ACL 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The retrieval model is an indispensable component for real-world knowledge-intensive tasks, e.g., open-domain question answering (ODQA). As separate retrieval skills are annotated for different datasets, recent work focuses on customized methods, limiting the model transferability and scalability. In this work, we propose a modular retriever where individual modules correspond to key skills that can be reused across datasets. Our approach supports flexible skill configurations based on the target domain to boost performance. To mitigate task interference, we design a novel modularization parameterization inspired by sparse Transformer. We demonstrate that our model can benefit from self-supervised pretraining on Wikipedia and fine-tuning using multiple ODQA datasets, both in a multi-task fashion. Our approach outperforms recent self-supervised retrievers in zero-shot evaluations and achieves state-of-the-art fine-tuned retrieval performance on NQ, HotpotQA and OTT-QA.
[ { "created": "Thu, 4 May 2023 20:19:39 GMT", "version": "v1" }, { "created": "Fri, 26 May 2023 17:19:58 GMT", "version": "v2" } ]
2023-05-29
[ [ "Ma", "Kaixin", "" ], [ "Cheng", "Hao", "" ], [ "Zhang", "Yu", "" ], [ "Liu", "Xiaodong", "" ], [ "Nyberg", "Eric", "" ], [ "Gao", "Jianfeng", "" ] ]
The retrieval model is an indispensable component for real-world knowledge-intensive tasks, e.g., open-domain question answering (ODQA). As separate retrieval skills are annotated for different datasets, recent work focuses on customized methods, limiting the model transferability and scalability. In this work, we propose a modular retriever where individual modules correspond to key skills that can be reused across datasets. Our approach supports flexible skill configurations based on the target domain to boost performance. To mitigate task interference, we design a novel modularization parameterization inspired by sparse Transformer. We demonstrate that our model can benefit from self-supervised pretraining on Wikipedia and fine-tuning using multiple ODQA datasets, both in a multi-task fashion. Our approach outperforms recent self-supervised retrievers in zero-shot evaluations and achieves state-of-the-art fine-tuned retrieval performance on NQ, HotpotQA and OTT-QA.
1808.01990
Fatih Cakir
Fatih Cakir, Kun He, Stan Sclaroff
Hashing with Binary Matrix Pursuit
23 pages, 4 figures. In Proceedings of European Conference on Computer Vision (ECCV), 2018
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose theoretical and empirical improvements for two-stage hashing methods. We first provide a theoretical analysis on the quality of the binary codes and show that, under mild assumptions, a residual learning scheme can construct binary codes that fit any neighborhood structure with arbitrary accuracy. Secondly, we show that with high-capacity hash functions such as CNNs, binary code inference can be greatly simplified for many standard neighborhood definitions, yielding smaller optimization problems and more robust codes. Incorporating our findings, we propose a novel two-stage hashing method that significantly outperforms previous hashing studies on widely used image retrieval benchmarks.
[ { "created": "Mon, 6 Aug 2018 16:51:36 GMT", "version": "v1" } ]
2018-08-07
[ [ "Cakir", "Fatih", "" ], [ "He", "Kun", "" ], [ "Sclaroff", "Stan", "" ] ]
We propose theoretical and empirical improvements for two-stage hashing methods. We first provide a theoretical analysis on the quality of the binary codes and show that, under mild assumptions, a residual learning scheme can construct binary codes that fit any neighborhood structure with arbitrary accuracy. Secondly, we show that with high-capacity hash functions such as CNNs, binary code inference can be greatly simplified for many standard neighborhood definitions, yielding smaller optimization problems and more robust codes. Incorporating our findings, we propose a novel two-stage hashing method that significantly outperforms previous hashing studies on widely used image retrieval benchmarks.
1701.07193
Leon Abdillah
Leon Andretti Abdillah
Exploring Students Blended Learning Through Social Media
10 pages
ComTech (Computer, Mathematics and Engineering Applications), 7(4), 245-254 (2016)
10.21512/comtech.v7i4.2495
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information technology (IT) has been used widely in many aspects of our daily life. Having discussed politics-related aspects in previous articles, in this article the author discusses social media as a learning environment for students. Social media, as a leading application on the internet, has changed many aspects of life and made them more globalized. This article discusses the use of social media to support learning activities for students in the faculty of computer science. The author uses Facebook and WordPress as an alternative to electronic learning: 1) online attendance tool, 2) media storage and dissemination of course materials, 3) event scheduling for the lectures. Social media have succeeded in changing modern learning styles and environments. The results of this study are learning activities such as: 1) Preparation, 2) Weekly meeting activities, 3) Course Page, 4) Social Media as Online Attendance Tool, 5) Social Media as Learning Repository and Dissemination, and 6) Social Media as Online Event Scheduling.
[ { "created": "Wed, 25 Jan 2017 07:41:05 GMT", "version": "v1" } ]
2018-04-24
[ [ "Abdillah", "Leon Andretti", "" ] ]
Information technology (IT) has been used widely in many aspects of our daily life. Having discussed politics-related aspects in previous articles, in this article the author discusses social media as a learning environment for students. Social media, as a leading application on the internet, has changed many aspects of life and made them more globalized. This article discusses the use of social media to support learning activities for students in the faculty of computer science. The author uses Facebook and WordPress as an alternative to electronic learning: 1) online attendance tool, 2) media storage and dissemination of course materials, 3) event scheduling for the lectures. Social media have succeeded in changing modern learning styles and environments. The results of this study are learning activities such as: 1) Preparation, 2) Weekly meeting activities, 3) Course Page, 4) Social Media as Online Attendance Tool, 5) Social Media as Learning Repository and Dissemination, and 6) Social Media as Online Event Scheduling.
2007.15879
Sina Sharif Mansouri
Sina Sharif Mansouri, Christoforos Kanellakis, Bjorn Lindqvist, Farhad Pourkamali-Anaraki, Ali-akbar Agha-mohammadi, Joel Burdick and George Nikolakopoulos
A Unified NMPC Scheme for MAVs Navigation with 3D Collision Avoidance under Position Uncertainty
null
IEEE Robotics and Automation Letters, Volume 5, Issue 4, On Page(s) 5740-5747, October 2020
10.1109/LRA.2020.3010485
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article proposes a novel Nonlinear Model Predictive Control (NMPC) framework for Micro Aerial Vehicle (MAV) autonomous navigation in constrained environments. The introduced framework allows us to consider the nonlinear dynamics of MAVs and guarantees real-time performance. Our first contribution is to design a computationally efficient subspace clustering method to reveal, from geometrical constraints, the underlying constraint planes within a 3D point cloud obtained from a 3D lidar scanner. The second contribution of our work is to incorporate the extracted information into the nonlinear constraints of NMPC for avoiding collisions. Our third contribution focuses on making the controller robust by considering the uncertainty of localization and NMPC using the Shannon entropy. This step enables us to track either the position or velocity references, or none of them if necessary. As a result, the collision avoidance constraints are defined in the local coordinates of MAVs and remain active, guaranteeing collision avoidance despite localization uncertainties, e.g., position estimation drifts. Additionally, as the platform continues the mission, this will result in less uncertain position estimations, due to the feature extraction and loop closure. The efficacy of the suggested framework has been evaluated using various simulations in the Gazebo environment.
[ { "created": "Fri, 31 Jul 2020 07:26:49 GMT", "version": "v1" } ]
2020-08-03
[ [ "Mansouri", "Sina Sharif", "" ], [ "Kanellakis", "Christoforos", "" ], [ "Lindqvist", "Bjorn", "" ], [ "Pourkamali-Anaraki", "Farhad", "" ], [ "Agha-mohammadi", "Ali-akbar", "" ], [ "Burdick", "Joel", "" ], [ "Nikolakopoulos", "George", "" ] ]
This article proposes a novel Nonlinear Model Predictive Control (NMPC) framework for Micro Aerial Vehicle (MAV) autonomous navigation in constrained environments. The introduced framework allows us to consider the nonlinear dynamics of MAVs and guarantees real-time performance. Our first contribution is to design a computationally efficient subspace clustering method to reveal, from geometrical constraints, the underlying constraint planes within a 3D point cloud obtained from a 3D lidar scanner. The second contribution of our work is to incorporate the extracted information into the nonlinear constraints of NMPC for avoiding collisions. Our third contribution focuses on making the controller robust by considering the uncertainty of localization and NMPC using the Shannon entropy. This step enables us to track either the position or velocity references, or none of them if necessary. As a result, the collision avoidance constraints are defined in the local coordinates of MAVs and remain active, guaranteeing collision avoidance despite localization uncertainties, e.g., position estimation drifts. Additionally, as the platform continues the mission, this will result in less uncertain position estimations, due to the feature extraction and loop closure. The efficacy of the suggested framework has been evaluated using various simulations in the Gazebo environment.
2401.17699
Jun Wan
Hao Fang, Ajian Liu, Haocheng Yuan, Junze Zheng, Dingheng Zeng, Yanhong Liu, Jiankang Deng, Sergio Escalera, Xiaoming Liu, Jun Wan, Zhen Lei
Unified Physical-Digital Face Attack Detection
12 pages, 8 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face Recognition (FR) systems can suffer from physical (i.e., print photo) and digital (i.e., DeepFake) attacks. However, previous related work rarely considers both situations at the same time. This implies the deployment of multiple models and thus more computational burden. This lack of an integrated model stems from two factors: (1) The lack of a dataset including both physical and digital attacks with ID consistency, meaning that the same ID covers the real face and all attack types; (2) Given the large intra-class variance between these two attacks, it is difficult to learn a compact feature space to detect both attacks simultaneously. To address these issues, we collect a Unified physical-digital Attack dataset, called UniAttackData. The dataset consists of $1,800$ participations of 2 and 12 physical and digital attacks, respectively, resulting in a total of 29,706 videos. Then, we propose a Unified Attack Detection framework based on Vision-Language Models (VLMs), namely UniAttackDetection, which includes three main modules: the Teacher-Student Prompts (TSP) module, focused on acquiring unified and specific knowledge respectively; the Unified Knowledge Mining (UKM) module, designed to capture a comprehensive feature space; and the Sample-Level Prompt Interaction (SLPI) module, aimed at grasping sample-level semantics. These three modules seamlessly form a robust unified attack detection framework. Extensive experiments on UniAttackData and three other datasets demonstrate the superiority of our approach for unified face attack detection.
[ { "created": "Wed, 31 Jan 2024 09:38:44 GMT", "version": "v1" } ]
2024-02-01
[ [ "Fang", "Hao", "" ], [ "Liu", "Ajian", "" ], [ "Yuan", "Haocheng", "" ], [ "Zheng", "Junze", "" ], [ "Zeng", "Dingheng", "" ], [ "Liu", "Yanhong", "" ], [ "Deng", "Jiankang", "" ], [ "Escalera", "Sergio", "" ], [ "Liu", "Xiaoming", "" ], [ "Wan", "Jun", "" ], [ "Lei", "Zhen", "" ] ]
Face Recognition (FR) systems can suffer from physical (i.e., print photo) and digital (i.e., DeepFake) attacks. However, previous related work rarely considers both situations at the same time. This implies the deployment of multiple models and thus more computational burden. This lack of an integrated model stems from two factors: (1) The lack of a dataset including both physical and digital attacks with ID consistency, meaning that the same ID covers the real face and all attack types; (2) Given the large intra-class variance between these two attacks, it is difficult to learn a compact feature space to detect both attacks simultaneously. To address these issues, we collect a Unified physical-digital Attack dataset, called UniAttackData. The dataset consists of $1,800$ participations of 2 and 12 physical and digital attacks, respectively, resulting in a total of 29,706 videos. Then, we propose a Unified Attack Detection framework based on Vision-Language Models (VLMs), namely UniAttackDetection, which includes three main modules: the Teacher-Student Prompts (TSP) module, focused on acquiring unified and specific knowledge respectively; the Unified Knowledge Mining (UKM) module, designed to capture a comprehensive feature space; and the Sample-Level Prompt Interaction (SLPI) module, aimed at grasping sample-level semantics. These three modules seamlessly form a robust unified attack detection framework. Extensive experiments on UniAttackData and three other datasets demonstrate the superiority of our approach for unified face attack detection.
2107.05138
S. Rasoul Etesami
S. Rasoul Etesami
Open-Loop Equilibrium Strategies for Dynamic Influence Maximization Game Over Social Networks
null
null
null
null
cs.GT cs.MA cs.SY eess.SY math.OC
http://creativecommons.org/publicdomain/zero/1.0/
We consider the problem of budget allocation for competitive influence maximization over social networks. In this problem, multiple competing parties (players) want to distribute their limited advertising resources over a set of social individuals to maximize their long-run cumulative payoffs. It is assumed that the individuals are connected via a social network and update their opinions based on the classical DeGroot model. The players must decide the budget distribution among the individuals at a finite number of campaign times to maximize their overall payoff given as a function of individuals' opinions. We show that i) the optimal investment strategy for the case of a single-player can be found in polynomial time by solving a concave program, and ii) the open-loop equilibrium strategies for the multiplayer dynamic game can be computed efficiently by following natural regret minimization dynamics. Our results extend the earlier work on the static version of the problem to a dynamic multistage game.
[ { "created": "Sun, 11 Jul 2021 22:31:08 GMT", "version": "v1" }, { "created": "Mon, 30 Aug 2021 04:07:57 GMT", "version": "v2" } ]
2021-08-31
[ [ "Etesami", "S. Rasoul", "" ] ]
We consider the problem of budget allocation for competitive influence maximization over social networks. In this problem, multiple competing parties (players) want to distribute their limited advertising resources over a set of social individuals to maximize their long-run cumulative payoffs. It is assumed that the individuals are connected via a social network and update their opinions based on the classical DeGroot model. The players must decide the budget distribution among the individuals at a finite number of campaign times to maximize their overall payoff given as a function of individuals' opinions. We show that i) the optimal investment strategy for the case of a single-player can be found in polynomial time by solving a concave program, and ii) the open-loop equilibrium strategies for the multiplayer dynamic game can be computed efficiently by following natural regret minimization dynamics. Our results extend the earlier work on the static version of the problem to a dynamic multistage game.
2102.01173
Tony Zhao
Tony Zhao, Irving Fang, Jeffrey Kim, Gerald Friedland
Multi-modal Ensemble Models for Predicting Video Memorability
null
null
null
null
cs.LG cs.AI cs.MM
http://creativecommons.org/licenses/by/4.0/
Modeling media memorability has been a consistent challenge in the field of machine learning. The Predicting Media Memorability task in MediaEval2020 is the latest benchmark among similar challenges addressing this topic. Building upon techniques developed in previous iterations of the challenge, we developed ensemble methods with the use of extracted video, image, text, and audio features. Critically, in this work we introduce and demonstrate the efficacy and high generalizability of extracted audio embeddings as a feature for the task of predicting media memorability.
[ { "created": "Mon, 1 Feb 2021 21:16:52 GMT", "version": "v1" } ]
2021-02-03
[ [ "Zhao", "Tony", "" ], [ "Fang", "Irving", "" ], [ "Kim", "Jeffrey", "" ], [ "Friedland", "Gerald", "" ] ]
Modeling media memorability has been a consistent challenge in the field of machine learning. The Predicting Media Memorability task in MediaEval2020 is the latest benchmark among similar challenges addressing this topic. Building upon techniques developed in previous iterations of the challenge, we developed ensemble methods with the use of extracted video, image, text, and audio features. Critically, in this work we introduce and demonstrate the efficacy and high generalizability of extracted audio embeddings as a feature for the task of predicting media memorability.
1911.10835
Vil\'em Zouhar
Vil\'em Zouhar and Ond\v{r}ej Bojar
Outbound Translation User Interface Ptakopet: A Pilot Study
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is not uncommon for Internet users to have to produce a text in a foreign language they have very little knowledge of and are unable to verify the translation quality. We call the task "outbound translation" and explore it by introducing an open-source modular system Ptakop\v{e}t. Its main purpose is to inspect human interaction with MT systems enhanced with additional subsystems, such as backward translation and quality estimation. We follow up with an experiment on (Czech) human annotators tasked to produce questions in a language they do not speak (German), with the help of Ptakop\v{e}t. We focus on three real-world use cases (communication with IT support, describing administrative issues and asking encyclopedic questions) from which we gain insight into different strategies users take when faced with outbound translation tasks. Round trip translation is known to be unreliable for evaluating MT systems but our experimental evaluation documents that it works very well for users, at least on MT systems of mid-range quality.
[ { "created": "Mon, 25 Nov 2019 11:22:45 GMT", "version": "v1" }, { "created": "Thu, 5 Mar 2020 17:40:27 GMT", "version": "v2" } ]
2020-03-06
[ [ "Zouhar", "Vilém", "" ], [ "Bojar", "Ondřej", "" ] ]
It is not uncommon for Internet users to have to produce a text in a foreign language they have very little knowledge of and are unable to verify the translation quality. We call the task "outbound translation" and explore it by introducing an open-source modular system Ptakop\v{e}t. Its main purpose is to inspect human interaction with MT systems enhanced with additional subsystems, such as backward translation and quality estimation. We follow up with an experiment on (Czech) human annotators tasked to produce questions in a language they do not speak (German), with the help of Ptakop\v{e}t. We focus on three real-world use cases (communication with IT support, describing administrative issues and asking encyclopedic questions) from which we gain insight into different strategies users take when faced with outbound translation tasks. Round trip translation is known to be unreliable for evaluating MT systems but our experimental evaluation documents that it works very well for users, at least on MT systems of mid-range quality.
2005.13820
Huaxi Huang
Huaxi Huang, Junjie Zhang, Jian Zhang, Qiang Wu, Chang Xu
TOAN: Target-Oriented Alignment Network for Fine-Grained Image Categorization with Few Labeled Samples
T-CSVT Accepted
null
10.1109/TCSVT.2021.3065693
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The challenges of high intra-class variance yet low inter-class fluctuations in fine-grained visual categorization are more severe with few labeled samples, \textit{i.e.,} Fine-Grained categorization problems under the Few-Shot setting (FGFS). High-order features are usually developed to uncover subtle differences between sub-categories in FGFS, but they are less effective in handling the high intra-class variance. In this paper, we propose a Target-Oriented Alignment Network (TOAN) to investigate the fine-grained relation between the target query image and support classes. The feature of each support image is transformed to match the query ones in the embedding feature space, which reduces the disparity explicitly within each category. Moreover, unlike existing FGFS approaches, which devise high-order features over the global image with less explicit consideration of discriminative parts, we generate discriminative fine-grained features by integrating compositional concept representations into global second-order pooling. Extensive experiments are conducted on four fine-grained benchmarks to demonstrate the effectiveness of TOAN compared with the state-of-the-art models.
[ { "created": "Thu, 28 May 2020 07:48:44 GMT", "version": "v1" }, { "created": "Wed, 10 Mar 2021 05:40:46 GMT", "version": "v2" } ]
2021-04-02
[ [ "Huang", "Huaxi", "" ], [ "Zhang", "Junjie", "" ], [ "Zhang", "Jian", "" ], [ "Wu", "Qiang", "" ], [ "Xu", "Chang", "" ] ]
The challenges of high intra-class variance yet low inter-class fluctuations in fine-grained visual categorization are more severe with few labeled samples, \textit{i.e.,} Fine-Grained categorization problems under the Few-Shot setting (FGFS). High-order features are usually developed to uncover subtle differences between sub-categories in FGFS, but they are less effective in handling the high intra-class variance. In this paper, we propose a Target-Oriented Alignment Network (TOAN) to investigate the fine-grained relation between the target query image and support classes. The feature of each support image is transformed to match the query ones in the embedding feature space, which reduces the disparity explicitly within each category. Moreover, unlike existing FGFS approaches, which devise high-order features over the global image with less explicit consideration of discriminative parts, we generate discriminative fine-grained features by integrating compositional concept representations into global second-order pooling. Extensive experiments are conducted on four fine-grained benchmarks to demonstrate the effectiveness of TOAN compared with the state-of-the-art models.
2206.13829
Davide Alessandro Coccomini
Davide Alessandro Coccomini, Roberto Caldelli, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato
Cross-Forgery Analysis of Vision Transformers and CNNs for Deepfake Image Detection
null
null
10.1145/3512732.3533582
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deepfake Generation Techniques are evolving at a rapid pace, making it possible to create realistic manipulated images and videos and endangering the serenity of modern society. The continual emergence of new and varied techniques brings with it a further problem to be faced, namely the ability of deepfake detection models to update themselves promptly in order to be able to identify manipulations carried out using even the most recent methods. This is an extremely complex problem to solve, as training a model requires large amounts of data, which are difficult to obtain if the deepfake generation method is too recent. Moreover, continuously retraining a network would be unfeasible. In this paper, we ask ourselves if, among the various deep learning techniques, there is one that is able to generalise the concept of deepfake to such an extent that it does not remain tied to one or more specific deepfake generation methods used in the training set. We compared a Vision Transformer with an EfficientNetV2 on a cross-forgery context based on the ForgeryNet dataset. From our experiments, it emerges that EfficientNetV2 has a greater tendency to specialize, often obtaining better results on training methods, while Vision Transformers exhibit a superior generalization ability that makes them more competent even on images generated with new methodologies.
[ { "created": "Tue, 28 Jun 2022 08:50:22 GMT", "version": "v1" } ]
2022-06-29
[ [ "Coccomini", "Davide Alessandro", "" ], [ "Caldelli", "Roberto", "" ], [ "Falchi", "Fabrizio", "" ], [ "Gennaro", "Claudio", "" ], [ "Amato", "Giuseppe", "" ] ]
Deepfake Generation Techniques are evolving at a rapid pace, making it possible to create realistic manipulated images and videos and endangering the serenity of modern society. The continual emergence of new and varied techniques brings with it a further problem to be faced, namely the ability of deepfake detection models to update themselves promptly in order to be able to identify manipulations carried out using even the most recent methods. This is an extremely complex problem to solve, as training a model requires large amounts of data, which are difficult to obtain if the deepfake generation method is too recent. Moreover, continuously retraining a network would be unfeasible. In this paper, we ask ourselves if, among the various deep learning techniques, there is one that is able to generalise the concept of deepfake to such an extent that it does not remain tied to one or more specific deepfake generation methods used in the training set. We compared a Vision Transformer with an EfficientNetV2 on a cross-forgery context based on the ForgeryNet dataset. From our experiments, it emerges that EfficientNetV2 has a greater tendency to specialize, often obtaining better results on training methods, while Vision Transformers exhibit a superior generalization ability that makes them more competent even on images generated with new methodologies.
2312.01256
Chenglu Jin
Niloufar Sayadi, Phuong Ha Nguyen, Marten van Dijk, Chenglu Jin
Breaking XOR Arbiter PUFs without Reliability Information
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unreliable XOR Arbiter PUFs were broken by a machine learning attack, which targets the underlying Arbiter PUFs individually. However, reliability information from the PUF was required for this attack. We show that, for the first time, a perfectly reliable XOR Arbiter PUF, where no reliability information is accessible, can be efficiently attacked in the same divide-and-conquer manner. Our key insight is that the responses of correlated challenges also reveal their distance to the decision boundary. This leads to a chosen challenge attack on XOR Arbiter PUFs. The effectiveness of our attack is confirmed through PUF simulation and FPGA implementation.
[ { "created": "Sun, 3 Dec 2023 01:39:09 GMT", "version": "v1" } ]
2023-12-05
[ [ "Sayadi", "Niloufar", "" ], [ "Nguyen", "Phuong Ha", "" ], [ "van Dijk", "Marten", "" ], [ "Jin", "Chenglu", "" ] ]
Unreliable XOR Arbiter PUFs were broken by a machine learning attack, which targets the underlying Arbiter PUFs individually. However, reliability information from the PUF was required for this attack. We show that, for the first time, a perfectly reliable XOR Arbiter PUF, where no reliability information is accessible, can be efficiently attacked in the same divide-and-conquer manner. Our key insight is that the responses of correlated challenges also reveal their distance to the decision boundary. This leads to a chosen challenge attack on XOR Arbiter PUFs. The effectiveness of our attack is confirmed through PUF simulation and FPGA implementation.
2204.05575
Haibao Yu
Haibao Yu, Yizhen Luo, Mao Shu, Yiyi Huo, Zebang Yang, Yifeng Shi, Zhenglong Guo, Hanyu Li, Xing Hu, Jirui Yuan, Zaiqing Nie
DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection
CVPR2022
null
null
null
cs.CV cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Autonomous driving faces great safety challenges due to a lack of global perspective and limited long-range perception capabilities. It has been widely agreed that vehicle-infrastructure cooperation is required to achieve Level 5 autonomy. However, there is still no dataset from real scenarios available for computer vision researchers to work on vehicle-infrastructure cooperation-related problems. To accelerate computer vision research and innovation for Vehicle-Infrastructure Cooperative Autonomous Driving (VICAD), we release the DAIR-V2X Dataset, which is the first large-scale, multi-modality, multi-view dataset from real scenarios for VICAD. DAIR-V2X comprises 71254 LiDAR frames and 71254 camera frames, all captured from real scenes with 3D annotations. The Vehicle-Infrastructure Cooperative 3D Object Detection problem (VIC3D) is introduced, formulating the problem of collaboratively locating and identifying 3D objects using sensory inputs from both vehicle and infrastructure. In addition to solving traditional 3D object detection problems, the solution of VIC3D needs to consider the temporal asynchrony problem between vehicle and infrastructure sensors and the data transmission cost between them. Furthermore, we propose Time Compensation Late Fusion (TCLF), a late fusion framework for the VIC3D task as a benchmark based on DAIR-V2X. Find data, code, and more up-to-date information at https://thudair.baai.ac.cn/index and https://github.com/AIR-THU/DAIR-V2X.
[ { "created": "Tue, 12 Apr 2022 07:13:33 GMT", "version": "v1" } ]
2022-04-13
[ [ "Yu", "Haibao", "" ], [ "Luo", "Yizhen", "" ], [ "Shu", "Mao", "" ], [ "Huo", "Yiyi", "" ], [ "Yang", "Zebang", "" ], [ "Shi", "Yifeng", "" ], [ "Guo", "Zhenglong", "" ], [ "Li", "Hanyu", "" ], [ "Hu", "Xing", "" ], [ "Yuan", "Jirui", "" ], [ "Nie", "Zaiqing", "" ] ]
Autonomous driving faces great safety challenges due to a lack of global perspective and limited long-range perception capabilities. It has been widely agreed that vehicle-infrastructure cooperation is required to achieve Level 5 autonomy. However, there is still no dataset from real scenarios available for computer vision researchers to work on vehicle-infrastructure cooperation-related problems. To accelerate computer vision research and innovation for Vehicle-Infrastructure Cooperative Autonomous Driving (VICAD), we release the DAIR-V2X Dataset, which is the first large-scale, multi-modality, multi-view dataset from real scenarios for VICAD. DAIR-V2X comprises 71254 LiDAR frames and 71254 camera frames, all captured from real scenes with 3D annotations. The Vehicle-Infrastructure Cooperative 3D Object Detection problem (VIC3D) is introduced, formulating the problem of collaboratively locating and identifying 3D objects using sensory inputs from both vehicle and infrastructure. In addition to solving traditional 3D object detection problems, the solution of VIC3D needs to consider the temporal asynchrony problem between vehicle and infrastructure sensors and the data transmission cost between them. Furthermore, we propose Time Compensation Late Fusion (TCLF), a late fusion framework for the VIC3D task as a benchmark based on DAIR-V2X. Find data, code, and more up-to-date information at https://thudair.baai.ac.cn/index and https://github.com/AIR-THU/DAIR-V2X.
2105.09932
Zhijian Liu
Zhijian Liu, Alexander Amini, Sibo Zhu, Sertac Karaman, Song Han, Daniela Rus
Efficient and Robust LiDAR-Based End-to-End Navigation
ICRA 2021. The first two authors contributed equally to this work. Project page: https://le2ed.mit.edu/
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has been used to demonstrate end-to-end neural network learning for autonomous vehicle control from raw sensory input. While LiDAR sensors provide reliably accurate information, existing end-to-end driving solutions are mainly based on cameras since processing 3D data requires a large memory footprint and computation cost. On the other hand, increasing the robustness of these systems is also critical; however, even estimating the model's uncertainty is very challenging due to the cost of sampling-based methods. In this paper, we present an efficient and robust LiDAR-based end-to-end navigation framework. We first introduce Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design. We then propose Hybrid Evidential Fusion, which directly estimates the uncertainty of the prediction from only a single forward pass and then fuses the control predictions intelligently. We evaluate our system on a full-scale vehicle and demonstrate lane-stable driving as well as navigation capabilities. In the presence of out-of-distribution events (e.g., sensor failures), our system significantly improves robustness and reduces the number of takeovers in the real world.
[ { "created": "Thu, 20 May 2021 17:52:37 GMT", "version": "v1" } ]
2021-05-21
[ [ "Liu", "Zhijian", "" ], [ "Amini", "Alexander", "" ], [ "Zhu", "Sibo", "" ], [ "Karaman", "Sertac", "" ], [ "Han", "Song", "" ], [ "Rus", "Daniela", "" ] ]
Deep learning has been used to demonstrate end-to-end neural network learning for autonomous vehicle control from raw sensory input. While LiDAR sensors provide reliably accurate information, existing end-to-end driving solutions are mainly based on cameras since processing 3D data requires a large memory footprint and computation cost. On the other hand, increasing the robustness of these systems is also critical; however, even estimating the model's uncertainty is very challenging due to the cost of sampling-based methods. In this paper, we present an efficient and robust LiDAR-based end-to-end navigation framework. We first introduce Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design. We then propose Hybrid Evidential Fusion, which directly estimates the uncertainty of the prediction from only a single forward pass and then fuses the control predictions intelligently. We evaluate our system on a full-scale vehicle and demonstrate lane-stable driving as well as navigation capabilities. In the presence of out-of-distribution events (e.g., sensor failures), our system significantly improves robustness and reduces the number of takeovers in the real world.
1712.08409
Nils Bore
Nils Bore, Johan Ekekrantz, Patric Jensfelt, John Folkesson
Detection and Tracking of General Movable Objects in Large 3D Maps
Submitted for peer review
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the problem of detection and tracking of general objects with long-term dynamics, observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, it can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances, through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
[ { "created": "Fri, 22 Dec 2017 11:53:52 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2018 09:31:47 GMT", "version": "v2" } ]
2018-01-31
[ [ "Bore", "Nils", "" ], [ "Ekekrantz", "Johan", "" ], [ "Jensfelt", "Patric", "" ], [ "Folkesson", "John", "" ] ]
This paper studies the problem of detection and tracking of general objects with long-term dynamics, observed by a mobile robot moving in a large environment. A key problem is that due to the environment scale, it can only observe a subset of the objects at any given time. Since some time passes between observations of objects in different places, the objects might be moved when the robot is not there. We propose a model for this movement in which the objects typically only move locally, but with some small probability they jump longer distances, through what we call global motion. For filtering, we decompose the posterior over local and global movements into two linked processes. The posterior over the global movements and measurement associations is sampled, while we track the local movement analytically using Kalman filters. This novel filter is evaluated on point cloud data gathered autonomously by a mobile robot over an extended period of time. We show that tracking jumping objects is feasible, and that the proposed probabilistic treatment outperforms previous methods when applied to real world data. The key to efficient probabilistic tracking in this scenario is focused sampling of the object posteriors.
2305.13168
Ningyu Zhang
Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Huajun Chen, Ningyu Zhang
LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities
Work in progress
null
null
null
cs.CL cs.AI cs.DB cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an exhaustive quantitative and qualitative evaluation of Large Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We engage in experiments across eight diverse datasets, focusing on four representative tasks encompassing entity and relation extraction, event extraction, link prediction, and question-answering, thereby thoroughly exploring LLMs' performance in the domain of construction and inference. Empirically, our findings suggest that LLMs, represented by GPT-4, are better suited as inference assistants than as few-shot information extractors. Specifically, while GPT-4 exhibits good performance in tasks related to KG construction, it excels further in reasoning tasks, surpassing fine-tuned models in certain cases. Moreover, our investigation extends to the potential generalization ability of LLMs for information extraction, leading to the proposition of a Virtual Knowledge Extraction task and the development of the corresponding VINE dataset. Based on these empirical findings, we further propose AutoKG, a multi-agent-based approach employing LLMs and external sources for KG construction and reasoning. We anticipate that this research can provide invaluable insights for future undertakings in the field of knowledge graphs. The code and datasets are available at https://github.com/zjunlp/AutoKG.
[ { "created": "Mon, 22 May 2023 15:56:44 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2024 10:15:25 GMT", "version": "v2" } ]
2024-02-23
[ [ "Zhu", "Yuqi", "" ], [ "Wang", "Xiaohan", "" ], [ "Chen", "Jing", "" ], [ "Qiao", "Shuofei", "" ], [ "Ou", "Yixin", "" ], [ "Yao", "Yunzhi", "" ], [ "Deng", "Shumin", "" ], [ "Chen", "Huajun", "" ], [ "Zhang", "Ningyu", "" ] ]
This paper presents an exhaustive quantitative and qualitative evaluation of Large Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We engage in experiments across eight diverse datasets, focusing on four representative tasks encompassing entity and relation extraction, event extraction, link prediction, and question-answering, thereby thoroughly exploring LLMs' performance in the domain of construction and inference. Empirically, our findings suggest that LLMs, represented by GPT-4, are better suited as inference assistants than as few-shot information extractors. Specifically, while GPT-4 exhibits good performance in tasks related to KG construction, it excels further in reasoning tasks, surpassing fine-tuned models in certain cases. Moreover, our investigation extends to the potential generalization ability of LLMs for information extraction, leading to the proposition of a Virtual Knowledge Extraction task and the development of the corresponding VINE dataset. Based on these empirical findings, we further propose AutoKG, a multi-agent-based approach employing LLMs and external sources for KG construction and reasoning. We anticipate that this research can provide invaluable insights for future undertakings in the field of knowledge graphs. The code and datasets are available at https://github.com/zjunlp/AutoKG.
1705.04678
Nikhil Galagali
Nikhil Galagali and Youssef M. Marzouk
Exploiting network topology for large-scale inference of nonlinear reaction models
null
null
null
null
cs.CE q-bio.QM stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The development of chemical reaction models aids understanding and prediction in areas ranging from biology to electrochemistry and combustion. A systematic approach to building reaction network models uses observational data not only to estimate unknown parameters, but also to learn model structure. Bayesian inference provides a natural approach to this data-driven construction of models. Yet traditional Bayesian model inference methodologies that numerically evaluate the evidence for each model are often infeasible for nonlinear reaction network inference, as the number of plausible models can be combinatorially large. Alternative approaches based on model-space sampling can enable large-scale network inference, but their realization presents many challenges. In this paper, we present new computational methods that make large-scale nonlinear network inference tractable. First, we exploit the topology of networks describing potential interactions among chemical species to design improved "between-model" proposals for reversible-jump Markov chain Monte Carlo. Second, we introduce a sensitivity-based determination of move types which, when combined with network-aware proposals, yields significant additional gains in sampling performance. These algorithms are demonstrated on inference problems drawn from systems biology, with nonlinear differential equation models of species interactions.
[ { "created": "Fri, 12 May 2017 17:55:44 GMT", "version": "v1" }, { "created": "Thu, 19 Jul 2018 18:35:07 GMT", "version": "v2" }, { "created": "Sun, 14 Oct 2018 16:11:46 GMT", "version": "v3" }, { "created": "Tue, 16 Oct 2018 01:26:55 GMT", "version": "v4" }, { "created": "Sat, 19 Jan 2019 03:43:48 GMT", "version": "v5" } ]
2019-01-23
[ [ "Galagali", "Nikhil", "" ], [ "Marzouk", "Youssef M.", "" ] ]
The development of chemical reaction models aids understanding and prediction in areas ranging from biology to electrochemistry and combustion. A systematic approach to building reaction network models uses observational data not only to estimate unknown parameters, but also to learn model structure. Bayesian inference provides a natural approach to this data-driven construction of models. Yet traditional Bayesian model inference methodologies that numerically evaluate the evidence for each model are often infeasible for nonlinear reaction network inference, as the number of plausible models can be combinatorially large. Alternative approaches based on model-space sampling can enable large-scale network inference, but their realization presents many challenges. In this paper, we present new computational methods that make large-scale nonlinear network inference tractable. First, we exploit the topology of networks describing potential interactions among chemical species to design improved "between-model" proposals for reversible-jump Markov chain Monte Carlo. Second, we introduce a sensitivity-based determination of move types which, when combined with network-aware proposals, yields significant additional gains in sampling performance. These algorithms are demonstrated on inference problems drawn from systems biology, with nonlinear differential equation models of species interactions.
1806.09171
Vitaly Petrov
Vitaly Petrov, Sergey Andreev, Mario Gerla, Yevgeni Koucheryavy
Breaking the Limits in Urban Video Monitoring: Massive Crowd Sourced Surveillance over Vehicles
8 pages, 5 figures, accepted to IEEE Wireless Communications, 2019
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contemporary urban environments are in urgent need of means for intelligent decision-making, where a crucial role belongs to smart video surveillance systems. While existing deployments of stationary monitoring cameras already deliver notable societal benefits, the proposed concept of massive video surveillance over connected vehicles that we contribute in this paper may further augment these important capabilities. We therefore introduce the envisioned system concept, discuss its implementation, outline the high-level architecture, and identify major data flows, while also offering insights into the corresponding design and deployment aspects. Our conducted case study confirms the potential of the described crowd-sourced vehicular system to effectively complement and eventually surpass even the best of today's static video surveillance setups. We expect that our proposal will become of value and integrate seamlessly into the future Internet-of-Things landscape, thus enabling a plethora of advanced urban applications.
[ { "created": "Sun, 24 Jun 2018 16:28:22 GMT", "version": "v1" } ]
2018-06-26
[ [ "Petrov", "Vitaly", "" ], [ "Andreev", "Sergey", "" ], [ "Gerla", "Mario", "" ], [ "Koucheryavy", "Yevgeni", "" ] ]
Contemporary urban environments are in urgent need of means for intelligent decision-making, where a crucial role belongs to smart video surveillance systems. While existing deployments of stationary monitoring cameras already deliver notable societal benefits, the proposed concept of massive video surveillance over connected vehicles that we contribute in this paper may further augment these important capabilities. We therefore introduce the envisioned system concept, discuss its implementation, outline the high-level architecture, and identify major data flows, while also offering insights into the corresponding design and deployment aspects. Our conducted case study confirms the potential of the described crowd-sourced vehicular system to effectively complement and eventually surpass even the best of today's static video surveillance setups. We expect that our proposal will become of value and integrate seamlessly into the future Internet-of-Things landscape, thus enabling a plethora of advanced urban applications.
2105.14478
Yian Li
Yian Li, Hai Zhao
Pre-training Universal Language Representation
Accepted by ACL-IJCNLP 2021 main conference
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite well-developed, cutting-edge representation learning for language, most language representation models usually focus on specific levels of linguistic units. This work introduces universal language representation learning, i.e., embeddings of different levels of linguistic units or text with quite diverse lengths in a uniform vector space. We propose the training objective MiSAD, which utilizes meaningful n-grams extracted from a large unlabeled corpus by a simple but effective algorithm for pre-trained language models. Then we empirically verify that a well-designed pre-training scheme may effectively yield universal language representation, which will bring great convenience when handling multiple layers of linguistic objects in a unified way. In particular, our model achieves the highest accuracy on analogy tasks at different language levels and significantly improves the performance on downstream tasks in the GLUE benchmark and a question answering dataset.
[ { "created": "Sun, 30 May 2021 09:29:01 GMT", "version": "v1" } ]
2021-06-01
[ [ "Li", "Yian", "" ], [ "Zhao", "Hai", "" ] ]
Despite well-developed, cutting-edge representation learning for language, most language representation models usually focus on specific levels of linguistic units. This work introduces universal language representation learning, i.e., embeddings of different levels of linguistic units or text with quite diverse lengths in a uniform vector space. We propose the training objective MiSAD, which utilizes meaningful n-grams extracted from a large unlabeled corpus by a simple but effective algorithm for pre-trained language models. Then we empirically verify that a well-designed pre-training scheme may effectively yield universal language representation, which will bring great convenience when handling multiple layers of linguistic objects in a unified way. In particular, our model achieves the highest accuracy on analogy tasks at different language levels and significantly improves the performance on downstream tasks in the GLUE benchmark and a question answering dataset.
2305.06900
Huzaifa Mustafa Unjhawala
Huzaifa Mustafa Unjhawala, Ruochun Zhang, Wei Hu, Jinlong Wu, Radu Serban, Dan Negrut
Using a Bayesian-Inference Approach to Calibrating Models for Simulation in Robotics
19 pages, 42 figures
061004-18 / Vol. 18, JUNE 2023
10.1115/1.4062199
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In robotics, simulation has the potential to reduce design time and costs, and lead to a more robust engineered solution and a safer development process. However, the use of simulators is predicated on the availability of good models. This contribution is concerned with improving the quality of these models via calibration, which is cast herein in a Bayesian framework. First, we discuss the Bayesian machinery involved in model calibration. Then, we demonstrate it in one example: calibration of a vehicle dynamics model that has a low degree-of-freedom count and can be used for state estimation, model predictive control, or path planning. A high-fidelity simulator is used to emulate the ``experiments'' and generate the data for the calibration. The merit of this work is not tied to a new Bayesian methodology for calibration, but to the demonstration of how the Bayesian machinery can establish connections among models in computational dynamics, even when the data in use is noisy. The software used to generate the results reported herein is available in a public repository for unfettered use and distribution.
[ { "created": "Thu, 11 May 2023 15:41:59 GMT", "version": "v1" } ]
2023-05-12
[ [ "Unjhawala", "Huzaifa Mustafa", "" ], [ "Zhang", "Ruochun", "" ], [ "Hu", "Wei", "" ], [ "Wu", "Jinlong", "" ], [ "Serban", "Radu", "" ], [ "Negrut", "Dan", "" ] ]
In robotics, simulation has the potential to reduce design time and costs, and lead to a more robust engineered solution and a safer development process. However, the use of simulators is predicated on the availability of good models. This contribution is concerned with improving the quality of these models via calibration, which is cast herein in a Bayesian framework. First, we discuss the Bayesian machinery involved in model calibration. Then, we demonstrate it in one example: calibration of a vehicle dynamics model that has a low degree-of-freedom count and can be used for state estimation, model predictive control, or path planning. A high-fidelity simulator is used to emulate the ``experiments'' and generate the data for the calibration. The merit of this work is not tied to a new Bayesian methodology for calibration, but to the demonstration of how the Bayesian machinery can establish connections among models in computational dynamics, even when the data in use is noisy. The software used to generate the results reported herein is available in a public repository for unfettered use and distribution.
2111.04382
Raghavendra Sridharamurthy
Raghavendra Sridharamurthy and Vijay Natarajan
Comparative Analysis of Merge Trees using Local Tree Edit Distance
null
IEEE Transactions on Visualization and Computer Graphics, 29 (2), 2023, 1518--1530
10.1109/TVCG.2021.3122176
null
cs.GR cs.CG
http://creativecommons.org/licenses/by/4.0/
Comparative analysis of scalar fields is an important problem with various applications, including feature-directed visualization and feature tracking in time-varying data. Comparing topological structures that are abstract and succinct representations of the scalar fields leads to faster and more meaningful comparison. While there are many distance or similarity measures to compare topological structures in a global context, there are no known measures for comparing topological structures locally. While the global measures have many applications, they do not directly lend themselves to fine-grained analysis across multiple scales. We define a local variant of the tree edit distance and apply it towards local comparative analysis of merge trees with support for finer analysis. We also present experimental results on time-varying scalar fields, 3D cryo-electron microscopy data, and other synthetic data sets to show the utility of this approach in applications like symmetry detection and feature tracking.
[ { "created": "Mon, 8 Nov 2021 11:02:36 GMT", "version": "v1" } ]
2024-06-06
[ [ "Sridharamurthy", "Raghavendra", "" ], [ "Natarajan", "Vijay", "" ] ]
Comparative analysis of scalar fields is an important problem with various applications, including feature-directed visualization and feature tracking in time-varying data. Comparing topological structures that are abstract and succinct representations of the scalar fields leads to faster and more meaningful comparison. While there are many distance or similarity measures to compare topological structures in a global context, there are no known measures for comparing topological structures locally. While the global measures have many applications, they do not directly lend themselves to fine-grained analysis across multiple scales. We define a local variant of the tree edit distance and apply it towards local comparative analysis of merge trees with support for finer analysis. We also present experimental results on time-varying scalar fields, 3D cryo-electron microscopy data, and other synthetic data sets to show the utility of this approach in applications like symmetry detection and feature tracking.
1704.05617
Chun-Nan Hsu
Sanjeev Shenoy, Tsung-Ting Kuo, Rodney Gabriel, Julian McAuley and Chun-Nan Hsu
Deduplication in a massive clinical note dataset
Extended from the Master project report of Sanjeev Shenoy, Department of Computer Science and Engineering, University of California, San Diego. June 2016
null
null
null
cs.DB cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Duplication, whether exact or partial, is a common issue in many datasets. In clinical notes data, duplication (and near duplication) can arise for many reasons, such as the pervasive use of templates, copy-pasting, or notes being generated by automated procedures. A key challenge in removing such near duplicates is the size of such datasets; our own dataset consists of more than 10 million notes. To detect and correct such duplicates requires algorithms that are both accurate and highly scalable. We describe a solution based on Minhashing with Locality Sensitive Hashing. In this paper, we present the theory behind this method and present a database-inspired approach to make the method scalable. We also present a clustering technique using disjoint sets to produce dense clusters, which speeds up our algorithm.
[ { "created": "Wed, 19 Apr 2017 05:33:21 GMT", "version": "v1" } ]
2017-04-20
[ [ "Shenoy", "Sanjeev", "" ], [ "Kuo", "Tsung-Ting", "" ], [ "Gabriel", "Rodney", "" ], [ "McAuley", "Julian", "" ], [ "Hsu", "Chun-Nan", "" ] ]
Duplication, whether exact or partial, is a common issue in many datasets. In clinical notes data, duplication (and near duplication) can arise for many reasons, such as the pervasive use of templates, copy-pasting, or notes being generated by automated procedures. A key challenge in removing such near duplicates is the size of such datasets; our own dataset consists of more than 10 million notes. To detect and correct such duplicates requires algorithms that are both accurate and highly scalable. We describe a solution based on Minhashing with Locality Sensitive Hashing. In this paper, we present the theory behind this method and present a database-inspired approach to make the method scalable. We also present a clustering technique using disjoint sets to produce dense clusters, which speeds up our algorithm.
2404.03943
Chen Wang
Chen Wang, Haoxiang Luo, Kun Zhang, Hua Chen, Jia Pan, Wei Zhang
POMDP-Guided Active Force-Based Search for Robotic Insertion
null
null
10.1109/IROS55552.2023.10342421
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In robotic insertion tasks where the uncertainty exceeds the allowable tolerance, a good search strategy is essential for successful insertion and significantly influences efficiency. The commonly used blind search method is time-consuming and does not exploit the rich contact information. In this paper, we propose a novel search strategy that actively utilizes the information contained in the contact configuration and shows high efficiency. In particular, we formulate this problem as a Partially Observable Markov Decision Process (POMDP) with carefully designed primitives based on an in-depth analysis of the contact configuration's static stability. From the formulated POMDP, we can derive a novel search strategy. Thanks to its simplicity, this search strategy can be incorporated into a Finite-State-Machine (FSM) controller. The behaviors of the FSM controller are realized through a low-level Cartesian Impedance Controller. Our method is based purely on the robot's proprioceptive sensing and does not need visual or tactile sensors. To evaluate the effectiveness of our proposed strategy and control framework, we conduct extensive comparison experiments in simulation, where we compare our method with the baseline approach. The results demonstrate that our proposed method achieves a higher success rate with a shorter search time and search trajectory length compared to the baseline method. Additionally, we show that our method is robust to various initial displacement errors.
[ { "created": "Fri, 5 Apr 2024 08:17:03 GMT", "version": "v1" } ]
2024-04-08
[ [ "Wang", "Chen", "" ], [ "Luo", "Haoxiang", "" ], [ "Zhang", "Kun", "" ], [ "Chen", "Hua", "" ], [ "Pan", "Jia", "" ], [ "Zhang", "Wei", "" ] ]
In robotic insertion tasks where the uncertainty exceeds the allowable tolerance, a good search strategy is essential for successful insertion and significantly influences efficiency. The commonly used blind search method is time-consuming and does not exploit the rich contact information. In this paper, we propose a novel search strategy that actively utilizes the information contained in the contact configuration and shows high efficiency. In particular, we formulate this problem as a Partially Observable Markov Decision Process (POMDP) with carefully designed primitives based on an in-depth analysis of the contact configuration's static stability. From the formulated POMDP, we can derive a novel search strategy. Thanks to its simplicity, this search strategy can be incorporated into a Finite-State-Machine (FSM) controller. The behaviors of the FSM controller are realized through a low-level Cartesian Impedance Controller. Our method is based purely on the robot's proprioceptive sensing and does not need visual or tactile sensors. To evaluate the effectiveness of our proposed strategy and control framework, we conduct extensive comparison experiments in simulation, where we compare our method with the baseline approach. The results demonstrate that our proposed method achieves a higher success rate with a shorter search time and search trajectory length compared to the baseline method. Additionally, we show that our method is robust to various initial displacement errors.
2110.00480
Yifan Song
Kevin K\"oser, Yifan Song, Lasse Petersen, Emanuel Wenzlaff, Felix Woelk
Robustly Removing Deep Sea Lighting Effects for Visual Mapping of Abyssal Plains
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The majority of Earth's surface lies deep in the oceans, where no surface light reaches. Robots diving down to great depths must bring light sources that create moving illumination patterns in the darkness, such that the same 3D point appears with a different color in each image. On top of that, scattering and attenuation of light in the water make images appear foggy and typically blueish, the degradation depending on each pixel's distance to its observed seafloor patch, on the local composition of the water, and on the relative poses and cones of the light sources. Consequently, visual mapping, including image matching and surface albedo estimation, severely suffers from the effects that co-moving light sources produce, and larger mosaic maps from photos are often dominated by lighting effects that obscure the actual seafloor structure. In this contribution, a practical approach to estimating and compensating these lighting effects on predominantly homogeneous, flat seafloor regions, as can be found in the Abyssal plains of our oceans, is presented. The method is essentially parameter-free and intended as a preprocessing step to facilitate visual mapping, but already produces convincing lighting artefact compensation up to a global white balance factor. It does not need to be trained beforehand on huge sets of annotated images, which are not available for the deep sea. Rather, we motivate our work by physical models of light propagation, perform robust statistics-based estimates of additive and multiplicative nuisances that avoid explicit parameters for light, camera, water or scene, discuss the breakdown point of the algorithms, and show results on imagery captured by robots at several kilometers of water depth.
[ { "created": "Fri, 1 Oct 2021 15:28:07 GMT", "version": "v1" } ]
2021-10-04
[ [ "Köser", "Kevin", "" ], [ "Song", "Yifan", "" ], [ "Petersen", "Lasse", "" ], [ "Wenzlaff", "Emanuel", "" ], [ "Woelk", "Felix", "" ] ]
The majority of Earth's surface lies deep in the oceans, where no surface light reaches. Robots diving down to great depths must bring light sources that create moving illumination patterns in the darkness, such that the same 3D point appears with a different color in each image. On top of that, scattering and attenuation of light in the water make images appear foggy and typically blueish, the degradation depending on each pixel's distance to its observed seafloor patch, on the local composition of the water, and on the relative poses and cones of the light sources. Consequently, visual mapping, including image matching and surface albedo estimation, severely suffers from the effects that co-moving light sources produce, and larger mosaic maps from photos are often dominated by lighting effects that obscure the actual seafloor structure. In this contribution, a practical approach to estimating and compensating these lighting effects on predominantly homogeneous, flat seafloor regions, as can be found in the Abyssal plains of our oceans, is presented. The method is essentially parameter-free and intended as a preprocessing step to facilitate visual mapping, but already produces convincing lighting artefact compensation up to a global white balance factor. It does not need to be trained beforehand on huge sets of annotated images, which are not available for the deep sea. Rather, we motivate our work by physical models of light propagation, perform robust statistics-based estimates of additive and multiplicative nuisances that avoid explicit parameters for light, camera, water or scene, discuss the breakdown point of the algorithms, and show results on imagery captured by robots at several kilometers of water depth.
2406.20092
Jie-Neng Chen
Jieneng Chen, Luoxin Ye, Ju He, Zhao-Yang Wang, Daniel Khashabi, Alan Yuille
LLaVolta: Efficient Multi-modal Models via Stage-wise Visual Context Compression
Code is available at https://github.com/Beckschen/LLaVolta
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While significant advancements have been made in compressed representations for text embeddings in large language models (LLMs), the compression of visual tokens in large multi-modal models (LMMs) has remained a largely overlooked area. In this work, we present a study of the redundancy of visual tokens and of efficient training within these models. Our initial experiments show that eliminating up to 70% of visual tokens at the testing stage by simple average pooling leads to only a minimal 3% reduction in visual question answering accuracy on the GQA benchmark, indicating significant redundancy in visual context. Addressing this, we introduce the Visual Context Compressor, which reduces the number of visual tokens during training to enhance training efficiency without sacrificing performance. To minimize the information loss caused by the compression of visual tokens while maintaining training efficiency, we develop LLaVolta as a lite training scheme. LLaVolta incorporates stage-wise visual context compression to progressively compress the visual tokens from heavily to lightly, and finally applies no compression at the end of training, yielding no loss of information when testing. Extensive experiments demonstrate that our approach enhances the performance of MLLMs in both image-language and video-language understanding, while also significantly cutting training costs. Code is available at https://github.com/Beckschen/LLaVolta
[ { "created": "Fri, 28 Jun 2024 17:57:14 GMT", "version": "v1" } ]
2024-07-01
[ [ "Chen", "Jieneng", "" ], [ "Ye", "Luoxin", "" ], [ "He", "Ju", "" ], [ "Wang", "Zhao-Yang", "" ], [ "Khashabi", "Daniel", "" ], [ "Yuille", "Alan", "" ] ]
While significant advancements have been made in compressed representations for text embeddings in large language models (LLMs), the compression of visual tokens in large multi-modal models (LMMs) has remained a largely overlooked area. In this work, we present a study of the redundancy of visual tokens and of efficient training within these models. Our initial experiments show that eliminating up to 70% of visual tokens at the testing stage by simple average pooling leads to only a minimal 3% reduction in visual question answering accuracy on the GQA benchmark, indicating significant redundancy in visual context. Addressing this, we introduce the Visual Context Compressor, which reduces the number of visual tokens during training to enhance training efficiency without sacrificing performance. To minimize the information loss caused by the compression of visual tokens while maintaining training efficiency, we develop LLaVolta as a lite training scheme. LLaVolta incorporates stage-wise visual context compression to progressively compress the visual tokens from heavily to lightly, and finally applies no compression at the end of training, yielding no loss of information when testing. Extensive experiments demonstrate that our approach enhances the performance of MLLMs in both image-language and video-language understanding, while also significantly cutting training costs. Code is available at https://github.com/Beckschen/LLaVolta
2101.11939
Wenguan Wang
Wenguan Wang, Tianfei Zhou, Fisher Yu, Jifeng Dai, Ender Konukoglu, Luc Van Gool
Exploring Cross-Image Pixel Contrast for Semantic Segmentation
Our code will be available at https://github.com/tfzhou/ContrastiveSeg
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current semantic segmentation methods focus only on mining "local" context, i.e., dependencies between pixels within individual images, by context-aggregation modules (e.g., dilated convolution, neural attention) or structure-aware optimization criteria (e.g., IoU-like loss). However, they ignore the "global" context of the training data, i.e., rich semantic relations between pixels across different images. Inspired by recent advances in unsupervised contrastive representation learning, we propose a pixel-wise contrastive framework for semantic segmentation in the fully supervised setting. The core idea is to enforce pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes. It raises a pixel-wise metric learning paradigm for semantic segmentation, by explicitly exploring the structures of labeled pixels, which were rarely explored before. Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing. We experimentally show that, with famous segmentation models (i.e., DeepLabV3, HRNet, OCR) and backbones (i.e., ResNet, HR-Net), our method brings consistent performance improvements across diverse datasets (i.e., Cityscapes, PASCAL-Context, COCO-Stuff, CamVid). We expect this work will encourage our community to rethink the current de facto training paradigm in fully supervised semantic segmentation.
[ { "created": "Thu, 28 Jan 2021 11:35:32 GMT", "version": "v1" }, { "created": "Sat, 30 Jan 2021 23:41:45 GMT", "version": "v2" }, { "created": "Thu, 11 Feb 2021 20:35:21 GMT", "version": "v3" }, { "created": "Tue, 30 Mar 2021 15:16:23 GMT", "version": "v4" } ]
2021-03-31
[ [ "Wang", "Wenguan", "" ], [ "Zhou", "Tianfei", "" ], [ "Yu", "Fisher", "" ], [ "Dai", "Jifeng", "" ], [ "Konukoglu", "Ender", "" ], [ "Van Gool", "Luc", "" ] ]
Current semantic segmentation methods focus only on mining "local" context, i.e., dependencies between pixels within individual images, by context-aggregation modules (e.g., dilated convolution, neural attention) or structure-aware optimization criteria (e.g., IoU-like loss). However, they ignore the "global" context of the training data, i.e., rich semantic relations between pixels across different images. Inspired by recent advances in unsupervised contrastive representation learning, we propose a pixel-wise contrastive framework for semantic segmentation in the fully supervised setting. The core idea is to enforce pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes. It raises a pixel-wise metric learning paradigm for semantic segmentation, by explicitly exploring the structures of labeled pixels, which were rarely explored before. Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing. We experimentally show that, with famous segmentation models (i.e., DeepLabV3, HRNet, OCR) and backbones (i.e., ResNet, HR-Net), our method brings consistent performance improvements across diverse datasets (i.e., Cityscapes, PASCAL-Context, COCO-Stuff, CamVid). We expect this work will encourage our community to rethink the current de facto training paradigm in fully supervised semantic segmentation.
1205.5199
Ashwin Ganesan
Ashwin Ganesan
Automorphism groups of Cayley graphs generated by connected transposition sets
null
Discrete Mathematics, vol. 313, no. 21, pp. 2482-2485, November 2013
10.1016/j.disc.2013.07.013
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Let $S$ be a set of transpositions that generates the symmetric group $S_n$, where $n \ge 3$. The transposition graph $T(S)$ is defined to be the graph with vertex set $\{1,\ldots,n\}$ and with vertices $i$ and $j$ being adjacent in $T(S)$ whenever $(i,j) \in S$. We prove that if the girth of the transposition graph $T(S)$ is at least 5, then the automorphism group of the Cayley graph $\Cay(S_n,S)$ is the semidirect product $R(S_n) \rtimes \Aut(S_n,S)$, where $\Aut(S_n,S)$ is the set of automorphisms of $S_n$ that fix $S$. This strengthens a result of Feng on transposition graphs that are trees. We also prove that if the transposition graph $T(S)$ is a 4-cycle, then the set of automorphisms of the Cayley graph $\Cay(S_4,S)$ that fix a vertex and each of its neighbors is isomorphic to the Klein 4-group and hence is nontrivial. We thus identify the existence of 4-cycles in the transposition graph as being an important factor in causing a potentially larger automorphism group of the Cayley graph.
[ { "created": "Wed, 23 May 2012 15:20:17 GMT", "version": "v1" }, { "created": "Wed, 5 Sep 2012 18:24:08 GMT", "version": "v2" }, { "created": "Sat, 1 Dec 2012 19:55:22 GMT", "version": "v3" }, { "created": "Sun, 23 Jun 2013 12:48:04 GMT", "version": "v4" } ]
2015-12-11
[ [ "Ganesan", "Ashwin", "" ] ]
Let $S$ be a set of transpositions that generates the symmetric group $S_n$, where $n \ge 3$. The transposition graph $T(S)$ is defined to be the graph with vertex set $\{1,\ldots,n\}$ and with vertices $i$ and $j$ being adjacent in $T(S)$ whenever $(i,j) \in S$. We prove that if the girth of the transposition graph $T(S)$ is at least 5, then the automorphism group of the Cayley graph $\Cay(S_n,S)$ is the semidirect product $R(S_n) \rtimes \Aut(S_n,S)$, where $\Aut(S_n,S)$ is the set of automorphisms of $S_n$ that fix $S$. This strengthens a result of Feng on transposition graphs that are trees. We also prove that if the transposition graph $T(S)$ is a 4-cycle, then the set of automorphisms of the Cayley graph $\Cay(S_4,S)$ that fix a vertex and each of its neighbors is isomorphic to the Klein 4-group and hence is nontrivial. We thus identify the existence of 4-cycles in the transposition graph as being an important factor in causing a potentially larger automorphism group of the Cayley graph.
1905.06109
Xiaosen Wang
Kun He and Wu Wang and Xiaosen Wang and John E. Hopcroft
A New Anchor Word Selection Method for the Separable Topic Discovery
18 pages, 4 figures
null
null
null
cs.IR cs.CL cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Separable Non-negative Matrix Factorization (SNMF) is an important method for topic modeling, where "separable" assumes every topic contains at least one anchor word, defined as a word that has non-zero probability only on that topic. SNMF exploits word co-occurrence patterns to reveal topics in two steps: anchor word selection and topic recovery. The quality of the anchor words strongly influences the quality of the extracted topics. The existing anchor word selection algorithm greedily finds an approximate convex hull in a high-dimensional word co-occurrence space. In this work, we propose a new method for anchor word selection by associating the word co-occurrence probability with word similarity and assuming that the most semantically dissimilar words are potential candidates for anchor words. Therefore, if the similarity of a word pair is very low, the two words are very likely to be anchor words. From the statistical information of text corpora, we can obtain the similarity of all word pairs. We build a word similarity graph in which the nodes correspond to words and the edge weights stand for word-pair similarity. Following this idea, we design a greedy method that finds a minimum edge-weight anchor clique of a given size in the graph for anchor word selection. Extensive experiments on real-world corpora demonstrate the effectiveness of the proposed anchor word selection method, which outperforms the common convex hull-based methods on the quality of the revealed topics. Meanwhile, our method is much faster than the typical SNMF-based method.
[ { "created": "Fri, 10 May 2019 12:16:10 GMT", "version": "v1" } ]
2019-05-16
[ [ "He", "Kun", "" ], [ "Wang", "Wu", "" ], [ "Wang", "Xiaosen", "" ], [ "Hopcroft", "John E.", "" ] ]
Separable Non-negative Matrix Factorization (SNMF) is an important method for topic modeling, where "separable" assumes every topic contains at least one anchor word, defined as a word that has non-zero probability only on that topic. SNMF exploits word co-occurrence patterns to reveal topics in two steps: anchor word selection and topic recovery. The quality of the anchor words strongly influences the quality of the extracted topics. The existing anchor word selection algorithm greedily finds an approximate convex hull in a high-dimensional word co-occurrence space. In this work, we propose a new method for anchor word selection by associating the word co-occurrence probability with word similarity and assuming that the most semantically dissimilar words are potential candidates for anchor words. Therefore, if the similarity of a word pair is very low, the two words are very likely to be anchor words. From the statistical information of text corpora, we can obtain the similarity of all word pairs. We build a word similarity graph in which the nodes correspond to words and the edge weights stand for word-pair similarity. Following this idea, we design a greedy method that finds a minimum edge-weight anchor clique of a given size in the graph for anchor word selection. Extensive experiments on real-world corpora demonstrate the effectiveness of the proposed anchor word selection method, which outperforms the common convex hull-based methods on the quality of the revealed topics. Meanwhile, our method is much faster than the typical SNMF-based method.
1201.2531
Gergely Acs
Gergely Acs and Claude Castelluccia
DREAM: DiffeRentially privatE smArt Metering
Shorter version appeared on Information Hiding Conference 2011
null
null
null
cs.CR
http://creativecommons.org/licenses/publicdomain/
This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees. With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics while learning only limited information about the activities of individual households. For example, a supplier cannot tell from a user's trace when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.
[ { "created": "Thu, 12 Jan 2012 11:15:02 GMT", "version": "v1" } ]
2012-01-13
[ [ "Acs", "Gergely", "" ], [ "Castelluccia", "Claude", "" ] ]
This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees. With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics while learning only limited information about the activities of individual households. For example, a supplier cannot tell from a user's trace when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.
2012.13196
Daniel O'Connor
Daniel O'Connor, Walter Vinci
RBM-Flow and D-Flow: Invertible Flows with Discrete Energy Base Spaces
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient sampling of complex data distributions can be achieved using trained invertible flows (IF), where the model distribution is generated by pushing a simple base distribution through multiple non-linear bijective transformations. However, the iterative nature of the transformations in IFs can limit the approximation to the target distribution. In this paper we seek to mitigate this by implementing RBM-Flow, an IF model whose base distribution is a Restricted Boltzmann Machine (RBM) with continuous smoothing applied. We show that by using RBM-Flow we are able to improve the quality of samples generated, quantified by the Inception Score (IS) and Frechet Inception Distance (FID), over baseline models with the same IF transformations, but with less expressive base distributions. Furthermore, we also obtain D-Flow, an IF model with uncorrelated discrete latent variables. We show that D-Flow achieves similar likelihoods and FID/IS scores to those of a typical IF with Gaussian base variables, but with the additional benefit that global features are meaningfully encoded as discrete labels in the latent space.
[ { "created": "Thu, 24 Dec 2020 11:05:27 GMT", "version": "v1" }, { "created": "Thu, 28 Jan 2021 16:03:39 GMT", "version": "v2" }, { "created": "Mon, 12 Jul 2021 10:00:47 GMT", "version": "v3" } ]
2021-07-13
[ [ "O'Connor", "Daniel", "" ], [ "Vinci", "Walter", "" ] ]
Efficient sampling of complex data distributions can be achieved using trained invertible flows (IF), where the model distribution is generated by pushing a simple base distribution through multiple non-linear bijective transformations. However, the iterative nature of the transformations in IFs can limit the approximation to the target distribution. In this paper we seek to mitigate this by implementing RBM-Flow, an IF model whose base distribution is a Restricted Boltzmann Machine (RBM) with continuous smoothing applied. We show that by using RBM-Flow we are able to improve the quality of samples generated, quantified by the Inception Score (IS) and Frechet Inception Distance (FID), over baseline models with the same IF transformations, but with less expressive base distributions. Furthermore, we also obtain D-Flow, an IF model with uncorrelated discrete latent variables. We show that D-Flow achieves similar likelihoods and FID/IS scores to those of a typical IF with Gaussian base variables, but with the additional benefit that global features are meaningfully encoded as discrete labels in the latent space.
1909.10407
Mandar Gogate
Mandar Gogate, Kia Dashtipour, Ahsan Adeel, Amir Hussain
CochleaNet: A Robust Language-independent Audio-Visual Model for Speech Enhancement
34 pages, 11 figures, Submitted to Information Fusion
null
null
null
cs.SD cs.CV cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Noisy situations cause huge problems for sufferers of hearing loss, as hearing aids often make the signal more audible but do not always restore intelligibility. In noisy settings, humans routinely exploit the audio-visual (AV) nature of speech to selectively suppress the background noise and to focus on the target speaker. In this paper, we present a causal, language-, noise- and speaker-independent AV deep neural network (DNN) architecture for speech enhancement (SE). The model exploits noisy acoustic cues and noise-robust visual cues to focus on the desired speaker and improve speech intelligibility. To evaluate the proposed SE framework, a first-of-its-kind AV binaural speech corpus, called ASPIRE, is recorded in real noisy environments including a cafeteria and a restaurant. We demonstrate superior performance of our approach in terms of objective measures and subjective listening tests over the state-of-the-art SE approaches as well as recent DNN-based SE models. In addition, our work challenges a popular belief that the scarcity of multi-language large-vocabulary AV corpora and of a wide variety of noises is a major bottleneck to building robust language-, speaker- and noise-independent SE systems. We show that a model trained on a synthetic mixture of the Grid corpus (with 33 speakers and a small English vocabulary) and CHiME-3 noises (consisting of only bus, pedestrian, cafeteria, and street noises) generalises well not only on large-vocabulary corpora but also on completely unrelated languages (such as Mandarin), a wide variety of speakers, and noises.
[ { "created": "Mon, 23 Sep 2019 14:59:47 GMT", "version": "v1" } ]
2019-09-24
[ [ "Gogate", "Mandar", "" ], [ "Dashtipour", "Kia", "" ], [ "Adeel", "Ahsan", "" ], [ "Hussain", "Amir", "" ] ]
Noisy situations cause huge problems for sufferers of hearing loss, as hearing aids often make the signal more audible but do not always restore intelligibility. In noisy settings, humans routinely exploit the audio-visual (AV) nature of speech to selectively suppress the background noise and to focus on the target speaker. In this paper, we present a causal, language-, noise- and speaker-independent AV deep neural network (DNN) architecture for speech enhancement (SE). The model exploits noisy acoustic cues and noise-robust visual cues to focus on the desired speaker and improve speech intelligibility. To evaluate the proposed SE framework, a first-of-its-kind AV binaural speech corpus, called ASPIRE, is recorded in real noisy environments including a cafeteria and a restaurant. We demonstrate superior performance of our approach in terms of objective measures and subjective listening tests over the state-of-the-art SE approaches as well as recent DNN-based SE models. In addition, our work challenges a popular belief that the scarcity of multi-language large-vocabulary AV corpora and of a wide variety of noises is a major bottleneck to building robust language-, speaker- and noise-independent SE systems. We show that a model trained on a synthetic mixture of the Grid corpus (with 33 speakers and a small English vocabulary) and CHiME-3 noises (consisting of only bus, pedestrian, cafeteria, and street noises) generalises well not only on large-vocabulary corpora but also on completely unrelated languages (such as Mandarin), a wide variety of speakers, and noises.
1205.7031
Fabian Schuh
Fabian Schuh, Johannes B. Huber
Nonlinear Trellis Description for Convolutionally Encoded Transmission Over ISI-channels with Applications for CPM
6 pages, 13 figures, submitted for IEEE-SCC-13
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/3.0/
In this paper we propose a matched decoding scheme for convolutionally encoded transmission over intersymbol interference (ISI) channels and devise a nonlinear trellis description. As an application we show that for coded continuous phase modulation (CPM) using a non-coherent receiver the number of states of the super trellis can be significantly reduced by means of a matched non-linear trellis encoder.
[ { "created": "Thu, 31 May 2012 16:19:28 GMT", "version": "v1" }, { "created": "Wed, 1 Aug 2012 10:55:22 GMT", "version": "v2" } ]
2012-08-02
[ [ "Schuh", "Fabian", "" ], [ "Huber", "Johannes B.", "" ] ]
In this paper we propose a matched decoding scheme for convolutionally encoded transmission over intersymbol interference (ISI) channels and devise a nonlinear trellis description. As an application we show that for coded continuous phase modulation (CPM) using a non-coherent receiver the number of states of the super trellis can be significantly reduced by means of a matched non-linear trellis encoder.
2006.14765
Tingmin Wu
Tingmin Wu, Wanlun Ma, Sheng Wen, Xin Xia, Cecile Paris, Surya Nepal, Yang Xiang
Analysis of Trending Topics and Text-based Channels of Information Delivery in Cybersecurity
13 pages (main content) + 4 pages (references and appendix)
null
null
null
cs.CR cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer users are generally faced with difficulties in making correct security decisions. While an increasingly fewer number of people are trying or willing to take formal security training, online sources including news, security blogs, and websites are continuously making security knowledge more accessible. Analysis of cybersecurity texts can provide insights into the trending topics and identify current security issues as well as how cyber attacks evolve over time. These in turn can support researchers and practitioners in predicting and preparing for these attacks. Comparing different sources may facilitate the learning process for normal users by persisting the security knowledge gained from different cybersecurity contexts. Prior studies neither systematically analysed the wide range of digital sources nor provided any standardisation in analysing the trending topics from recent security texts. Although LDA has been widely adopted in topic generation, its generated topics cannot cover the cybersecurity concepts completely and considerably overlap. To address this issue, we propose a semi-automated classification method to generate comprehensive security categories instead of LDA-generated topics. We further compare the identified 16 security categories across different sources based on their popularity and impact. We have revealed several surprising findings. (1) The impact reflected from cybersecurity texts strongly correlates with the monetary loss caused by cybercrimes. (2) For most categories, security blogs share the largest popularity and largest absolute/relative impact over time. (3) Websites deliver security information without caring much about timeliness, where one third of the articles do not specify the date and the rest have a time lag in posting emerging security issues.
[ { "created": "Fri, 26 Jun 2020 03:00:04 GMT", "version": "v1" } ]
2020-06-29
[ [ "Wu", "Tingmin", "" ], [ "Ma", "Wanlun", "" ], [ "Wen", "Sheng", "" ], [ "Xia", "Xin", "" ], [ "Paris", "Cecile", "" ], [ "Nepal", "Surya", "" ], [ "Xiang", "Yang", "" ] ]
Computer users are generally faced with difficulties in making correct security decisions. While an increasingly fewer number of people are trying or willing to take formal security training, online sources including news, security blogs, and websites are continuously making security knowledge more accessible. Analysis of cybersecurity texts can provide insights into the trending topics and identify current security issues as well as how cyber attacks evolve over time. These in turn can support researchers and practitioners in predicting and preparing for these attacks. Comparing different sources may facilitate the learning process for normal users by persisting the security knowledge gained from different cybersecurity contexts. Prior studies neither systematically analysed the wide range of digital sources nor provided any standardisation in analysing the trending topics from recent security texts. Although LDA has been widely adopted in topic generation, its generated topics cannot cover the cybersecurity concepts completely and considerably overlap. To address this issue, we propose a semi-automated classification method to generate comprehensive security categories instead of LDA-generated topics. We further compare the identified 16 security categories across different sources based on their popularity and impact. We have revealed several surprising findings. (1) The impact reflected from cybersecurity texts strongly correlates with the monetary loss caused by cybercrimes. (2) For most categories, security blogs share the largest popularity and largest absolute/relative impact over time. (3) Websites deliver security information without caring much about timeliness, where one third of the articles do not specify the date and the rest have a time lag in posting emerging security issues.
2207.08803
Hashmat Shadab Malik
Hashmat Shadab Malik, Shahina K Kunhimon, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan
Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations
Accepted at BMVC'22 (Oral)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool unknown black-box models. Therefore, these attacks are restricted by the availability of an effective surrogate model. In this work, we relax this assumption and propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch with no labels and only a few data samples. Our training approach is based on a min-max scheme which reduces overfitting via an adversarial objective and thus optimizes for a more generalizable surrogate model. Our proposed attack is complementary to adversarial pixel restoration and is independent of any task-specific objective as it can be launched in a self-supervised manner. We successfully demonstrate the adversarial transferability of our approach to Vision Transformers as well as Convolutional Neural Networks for the tasks of classification, object detection, and video segmentation. Our training approach improves the transferability of the baseline unsupervised training method by 16.4% on the ImageNet validation set. Our codes & pre-trained surrogate models are available at: https://github.com/HashmatShadab/APR
[ { "created": "Mon, 18 Jul 2022 17:59:58 GMT", "version": "v1" }, { "created": "Mon, 8 Aug 2022 07:52:11 GMT", "version": "v2" }, { "created": "Fri, 14 Oct 2022 08:27:49 GMT", "version": "v3" } ]
2022-10-17
[ [ "Malik", "Hashmat Shadab", "" ], [ "Kunhimon", "Shahina K", "" ], [ "Naseer", "Muzammal", "" ], [ "Khan", "Salman", "" ], [ "Khan", "Fahad Shahbaz", "" ] ]
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool unknown black-box models. Therefore, these attacks are restricted by the availability of an effective surrogate model. In this work, we relax this assumption and propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch with no labels and only a few data samples. Our training approach is based on a min-max scheme which reduces overfitting via an adversarial objective and thus optimizes for a more generalizable surrogate model. Our proposed attack is complementary to adversarial pixel restoration and is independent of any task-specific objective as it can be launched in a self-supervised manner. We successfully demonstrate the adversarial transferability of our approach to Vision Transformers as well as Convolutional Neural Networks for the tasks of classification, object detection, and video segmentation. Our training approach improves the transferability of the baseline unsupervised training method by 16.4% on the ImageNet validation set. Our codes & pre-trained surrogate models are available at: https://github.com/HashmatShadab/APR
2402.13126
Yan Pang
Yan Pang, Yang Zhang, Tianhao Wang
VGMShield: Mitigating Misuse of Video Generative Models
17 pages, 10 figures
null
null
null
cs.CR cs.AI cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid advancement in video generation, people can conveniently utilize video generation models to create videos tailored to their specific desires. Nevertheless, there are also growing concerns about their potential misuse in creating and disseminating false information. In this work, we introduce VGMShield: a set of three straightforward but pioneering mitigations spanning the lifecycle of fake video generation. We start with \textit{fake video detection}, trying to understand whether there is uniqueness in generated videos and whether we can differentiate them from real videos; then, we investigate the \textit{tracing} problem, which maps a fake video back to the model that generated it. To these ends, we propose to leverage pre-trained models that focus on {\it spatial-temporal dynamics} as the backbone to identify inconsistencies in videos. Through experiments on seven state-of-the-art open-source models, we demonstrate that current models still cannot perfectly handle spatial-temporal relationships, and thus, we can accomplish detection and tracing with nearly perfect accuracy. Furthermore, anticipating future generative model improvements, we propose a {\it prevention} method that adds invisible perturbations to images to make the generated videos look unreal. Together with fake video detection and tracing, our multi-faceted set of solutions can effectively mitigate misuse of video generative models.
[ { "created": "Tue, 20 Feb 2024 16:39:23 GMT", "version": "v1" } ]
2024-02-21
[ [ "Pang", "Yan", "" ], [ "Zhang", "Yang", "" ], [ "Wang", "Tianhao", "" ] ]
With the rapid advancement in video generation, people can conveniently utilize video generation models to create videos tailored to their specific desires. Nevertheless, there are also growing concerns about their potential misuse in creating and disseminating false information. In this work, we introduce VGMShield: a set of three straightforward but pioneering mitigations spanning the lifecycle of fake video generation. We start with \textit{fake video detection}, trying to understand whether there is uniqueness in generated videos and whether we can differentiate them from real videos; then, we investigate the \textit{tracing} problem, which maps a fake video back to the model that generated it. To these ends, we propose to leverage pre-trained models that focus on {\it spatial-temporal dynamics} as the backbone to identify inconsistencies in videos. Through experiments on seven state-of-the-art open-source models, we demonstrate that current models still cannot perfectly handle spatial-temporal relationships, and thus, we can accomplish detection and tracing with nearly perfect accuracy. Furthermore, anticipating future generative model improvements, we propose a {\it prevention} method that adds invisible perturbations to images to make the generated videos look unreal. Together with fake video detection and tracing, our multi-faceted set of solutions can effectively mitigate misuse of video generative models.
2407.02304
Bas Van Den Heuvel
Bas van den Heuvel, Farzaneh Derakhshan, Stephanie Balzer
Information Flow Control in Cyclic Process Networks
Extended version of ECOOP24 paper
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Protection of confidential data is an important security consideration of today's applications. Of particular concern is to guard against unintentional leakage to a (malicious) observer, who may interact with the program and draw inferences from the observations made. Information flow control (IFC) type systems address this concern by statically ruling out such leakage. This paper contributes an IFC type system for message-passing concurrent programs, the computational model of choice for many of today's applications such as cloud computing and IoT applications. Such applications typically either implicitly or explicitly codify protocols according to which message exchange must happen, and to statically ensure protocol safety, behavioral type systems such as session types can be used. This paper marries IFC with session typing and contributes over prior work in the following regards: (1) support of realistic cyclic process networks as opposed to the restriction to tree-shaped networks, (2) more permissive, yet entirely secure, IFC enforcement, exploiting cyclic process networks, and (3) consideration of deadlocks as another form of side channel, asserting deadlock-sensitive noninterference (DSNI) for well-typed programs. To prove DSNI, the paper develops a novel logical relation that accounts for cyclic process networks. The logical relation is rooted in linear logic, but drops the tree-topology restriction imposed by prior work.
[ { "created": "Tue, 2 Jul 2024 14:37:17 GMT", "version": "v1" } ]
2024-07-03
[ [ "Heuvel", "Bas van den", "" ], [ "Derakhshan", "Farzaneh", "" ], [ "Balzer", "Stephanie", "" ] ]
Protection of confidential data is an important security consideration of today's applications. Of particular concern is to guard against unintentional leakage to a (malicious) observer, who may interact with the program and draw inferences from the observations made. Information flow control (IFC) type systems address this concern by statically ruling out such leakage. This paper contributes an IFC type system for message-passing concurrent programs, the computational model of choice for many of today's applications such as cloud computing and IoT applications. Such applications typically either implicitly or explicitly codify protocols according to which message exchange must happen, and to statically ensure protocol safety, behavioral type systems such as session types can be used. This paper marries IFC with session typing and contributes over prior work in the following regards: (1) support of realistic cyclic process networks as opposed to the restriction to tree-shaped networks, (2) more permissive, yet entirely secure, IFC enforcement, exploiting cyclic process networks, and (3) consideration of deadlocks as another form of side channel, asserting deadlock-sensitive noninterference (DSNI) for well-typed programs. To prove DSNI, the paper develops a novel logical relation that accounts for cyclic process networks. The logical relation is rooted in linear logic, but drops the tree-topology restriction imposed by prior work.
2105.11628
Guoqing Zhang
Yuhao Chen, Guoqing Zhang, Yujiang Lu, Zhenxing Wang, Yuhui Zheng, Ruili Wang
TIPCB: A Simple but Effective Part-based Convolutional Baseline for Text-based Person Search
27 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-based person search is a sub-task in the field of image retrieval, which aims to retrieve target person images according to a given textual description. The significant feature gap between the two modalities makes this task very challenging. Many existing methods attempt to utilize local alignment to address this problem at a fine-grained level. However, most relevant methods introduce additional models or complicated training and evaluation strategies, which are hard to use in realistic scenarios. In order to facilitate practical application, we propose a simple but effective end-to-end learning framework for text-based person search named TIPCB (i.e., Text-Image Part-based Convolutional Baseline). Firstly, a novel dual-path local alignment network structure is proposed to extract visual and textual local representations, in which images are segmented horizontally and texts are aligned adaptively. Then, we propose a multi-stage cross-modal matching strategy, which eliminates the modality gap at three feature levels, including the low level, local level and global level. Extensive experiments are conducted on the widely-used benchmark dataset (CUHK-PEDES) and verify that our method outperforms the state-of-the-art methods by 3.69%, 2.95% and 2.31% in terms of Top-1, Top-5 and Top-10 accuracy. Our code has been released at https://github.com/OrangeYHChen/TIPCB.
[ { "created": "Tue, 25 May 2021 03:00:21 GMT", "version": "v1" } ]
2021-05-26
[ [ "Chen", "Yuhao", "" ], [ "Zhang", "Guoqing", "" ], [ "Lu", "Yujiang", "" ], [ "Wang", "Zhenxing", "" ], [ "Zheng", "Yuhui", "" ], [ "Wang", "Ruili", "" ] ]
Text-based person search is a sub-task in the field of image retrieval, which aims to retrieve target person images according to a given textual description. The significant feature gap between the two modalities makes this task very challenging. Many existing methods attempt to utilize local alignment to address this problem at a fine-grained level. However, most relevant methods introduce additional models or complicated training and evaluation strategies, which are hard to use in realistic scenarios. In order to facilitate practical application, we propose a simple but effective end-to-end learning framework for text-based person search named TIPCB (i.e., Text-Image Part-based Convolutional Baseline). Firstly, a novel dual-path local alignment network structure is proposed to extract visual and textual local representations, in which images are segmented horizontally and texts are aligned adaptively. Then, we propose a multi-stage cross-modal matching strategy, which eliminates the modality gap at three feature levels, including the low level, local level and global level. Extensive experiments are conducted on the widely-used benchmark dataset (CUHK-PEDES) and verify that our method outperforms the state-of-the-art methods by 3.69%, 2.95% and 2.31% in terms of Top-1, Top-5 and Top-10 accuracy. Our code has been released at https://github.com/OrangeYHChen/TIPCB.
1312.0641
Samet Oymak
Samet Oymak, Christos Thrampoulidis, Babak Hassibi
Simple Bounds for Noisy Linear Inverse Problems with Exact Side Information
13 pages
null
null
null
cs.IT math.IT math.OC math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers the linear inverse problem where we wish to estimate a structured signal $x$ from its corrupted observations. When the problem is ill-posed, it is natural to make use of a convex function $f(\cdot)$ that exploits the structure of the signal. For example, the $\ell_1$ norm can be used for sparse signals. To carry out the estimation, we consider two well-known convex programs: 1) the second order cone program (SOCP), and 2) the Lasso. Assuming Gaussian measurements, we show that, if precise information about the value $f(x)$ or the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation. In particular, the reconstruction error becomes proportional to the "sparsity" of the signal rather than to the ambient dimension of the noise vector. We connect our results to existing works and provide a discussion on the relation of our results to the standard least-squares problem. Our error bounds are non-asymptotic and sharp; they apply to arbitrary convex functions and do not assume any distribution on the noise.
[ { "created": "Mon, 2 Dec 2013 22:07:05 GMT", "version": "v1" }, { "created": "Thu, 5 Dec 2013 20:58:46 GMT", "version": "v2" } ]
2013-12-06
[ [ "Oymak", "Samet", "" ], [ "Thrampoulidis", "Christos", "" ], [ "Hassibi", "Babak", "" ] ]
This paper considers the linear inverse problem where we wish to estimate a structured signal $x$ from its corrupted observations. When the problem is ill-posed, it is natural to make use of a convex function $f(\cdot)$ that exploits the structure of the signal. For example, the $\ell_1$ norm can be used for sparse signals. To carry out the estimation, we consider two well-known convex programs: 1) the second order cone program (SOCP), and 2) the Lasso. Assuming Gaussian measurements, we show that, if precise information about the value $f(x)$ or the $\ell_2$-norm of the noise is available, one can do a particularly good job at estimation. In particular, the reconstruction error becomes proportional to the "sparsity" of the signal rather than to the ambient dimension of the noise vector. We connect our results to existing works and provide a discussion on the relation of our results to the standard least-squares problem. Our error bounds are non-asymptotic and sharp; they apply to arbitrary convex functions and do not assume any distribution on the noise.
2310.10330
Guillermo Encinas-Lago
Guillermo Encinas-Lago, Antonio Albanese, Vincenzo Sciancalepore, Marco Di Renzo, Xavier Costa-P\'erez
Unlocking Metasurface Practicality for B5G Networks: AI-assisted RIS Planning
null
null
null
null
cs.NI cs.AI eess.SP
http://creativecommons.org/licenses/by/4.0/
The advent of reconfigurable intelligent surfaces (RISs) brings along significant improvements for wireless technology on the verge of beyond-fifth-generation networks (B5G). The proven flexibility in influencing the propagation environment opens up the possibility of programmatically altering the wireless channel to the advantage of network designers, enabling the exploitation of higher-frequency bands for superior throughput, overcoming the challenging electromagnetic (EM) propagation properties at these frequency bands. However, RISs are not magic bullets. Their employment comes with significant complexity, requiring ad-hoc deployments and management operations to come to fruition. In this paper, we tackle the open problem of bringing RISs to the field, focusing on areas with little or no coverage. In fact, we present a first-of-its-kind deep reinforcement learning (DRL) solution, dubbed D-RISA, which trains a DRL agent and, in turn, obtains an optimal RIS deployment. We validate our framework in the indoor scenario of the Rennes railway station in France, assessing the performance of our algorithm against state-of-the-art (SOA) approaches. Our benchmarks showcase better coverage, i.e., a 10-dB increase in minimum signal-to-noise ratio (SNR), at lower computational time (up to 25 percent less), while improving scalability towards denser network deployments.
[ { "created": "Mon, 16 Oct 2023 12:14:42 GMT", "version": "v1" } ]
2023-10-17
[ [ "Encinas-Lago", "Guillermo", "" ], [ "Albanese", "Antonio", "" ], [ "Sciancalepore", "Vincenzo", "" ], [ "Di Renzo", "Marco", "" ], [ "Costa-Pérez", "Xavier", "" ] ]
The advent of reconfigurable intelligent surfaces (RISs) brings along significant improvements for wireless technology on the verge of beyond-fifth-generation networks (B5G). The proven flexibility in influencing the propagation environment opens up the possibility of programmatically altering the wireless channel to the advantage of network designers, enabling the exploitation of higher-frequency bands for superior throughput, overcoming the challenging electromagnetic (EM) propagation properties at these frequency bands. However, RISs are not magic bullets. Their employment comes with significant complexity, requiring ad-hoc deployments and management operations to come to fruition. In this paper, we tackle the open problem of bringing RISs to the field, focusing on areas with little or no coverage. In fact, we present a first-of-its-kind deep reinforcement learning (DRL) solution, dubbed D-RISA, which trains a DRL agent and, in turn, obtains an optimal RIS deployment. We validate our framework in the indoor scenario of the Rennes railway station in France, assessing the performance of our algorithm against state-of-the-art (SOA) approaches. Our benchmarks showcase better coverage, i.e., a 10-dB increase in minimum signal-to-noise ratio (SNR), at lower computational time (up to 25 percent less), while improving scalability towards denser network deployments.
2212.03371
Shiqing Wu
Guan Wang, Weihua Li, Edmund Lai, Jianhua Jiang
KATSum: Knowledge-aware Abstractive Text Summarization
Presented at PKAW 2022 (arXiv:2211.03888)
null
null
PKAW/2022/02
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Text Summarization is recognised as one of the NLP downstream tasks and it has been extensively investigated in recent years. It can assist people in perceiving information rapidly from the Internet, including news articles, social posts, videos, etc. Most existing research works attempt to develop summarization models to produce a better output. However, limitations of most existing models have become evident, including unfaithfulness and factual errors. In this paper, we propose a novel model, named Knowledge-aware Abstractive Text Summarization (KATSum), which leverages the advantages offered by Knowledge Graphs to enhance the standard Seq2Seq model. On top of that, Knowledge Graph triplets are extracted from the source text and utilised to provide keywords with relational information, producing coherent summaries free of factual errors. We conduct extensive experiments using real-world datasets. The results reveal that the proposed framework can effectively utilise the information from the Knowledge Graph and significantly reduce the factual errors in the summary.
[ { "created": "Tue, 6 Dec 2022 23:43:50 GMT", "version": "v1" } ]
2022-12-08
[ [ "Wang", "Guan", "" ], [ "Li", "Weihua", "" ], [ "Lai", "Edmund", "" ], [ "Jiang", "Jianhua", "" ] ]
Text Summarization is recognised as one of the NLP downstream tasks and it has been extensively investigated in recent years. It can assist people in perceiving information rapidly from the Internet, including news articles, social posts, videos, etc. Most existing research works attempt to develop summarization models to produce a better output. However, limitations of most existing models have become evident, including unfaithfulness and factual errors. In this paper, we propose a novel model, named Knowledge-aware Abstractive Text Summarization (KATSum), which leverages the advantages offered by Knowledge Graphs to enhance the standard Seq2Seq model. On top of that, Knowledge Graph triplets are extracted from the source text and utilised to provide keywords with relational information, producing coherent summaries free of factual errors. We conduct extensive experiments using real-world datasets. The results reveal that the proposed framework can effectively utilise the information from the Knowledge Graph and significantly reduce the factual errors in the summary.
2101.04690
Matthias Frey
Matthias Frey, Igor Bjelakovic, Slawomir Stanczak
Over-The-Air Computation in Correlated Channels
Extended version can be found at arXiv:2007.02648
2020 IEEE Information Theory Workshop (ITW), Riva del Garda, Italy, 11-15 April, 2021
10.1109/TSP.2021.3106115
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of Over-The-Air (OTA) computation in wireless networks, which has the potential to realize huge efficiency gains, for instance in the training of distributed ML models. We provide non-asymptotic theoretical guarantees for OTA computation in fast-fading wireless channels where the fading and noise may be correlated. The distributions of fading and noise are not restricted to Gaussian distributions, but instead are assumed to follow a distribution in the more general sub-gaussian class. Furthermore, our result does not make any assumptions on the distribution of the sources and therefore, it can, e.g., be applied to arbitrarily correlated sources. We illustrate our analysis with numerical evaluations for OTA computation of two example functions in large wireless networks: the arithmetic mean and the Euclidean norm.
[ { "created": "Tue, 12 Jan 2021 19:00:02 GMT", "version": "v1" } ]
2021-12-01
[ [ "Frey", "Matthias", "" ], [ "Bjelakovic", "Igor", "" ], [ "Stanczak", "Slawomir", "" ] ]
This paper addresses the problem of Over-The-Air (OTA) computation in wireless networks, which has the potential to realize huge efficiency gains, for instance in the training of distributed ML models. We provide non-asymptotic theoretical guarantees for OTA computation in fast-fading wireless channels where the fading and noise may be correlated. The distributions of fading and noise are not restricted to Gaussian distributions, but instead are assumed to follow a distribution in the more general sub-gaussian class. Furthermore, our result does not make any assumptions on the distribution of the sources and therefore, it can, e.g., be applied to arbitrarily correlated sources. We illustrate our analysis with numerical evaluations for OTA computation of two example functions in large wireless networks: the arithmetic mean and the Euclidean norm.