id
stringlengths
9
10
submitter
stringlengths
1
64
authors
stringlengths
4
20.7k
title
stringlengths
4
246
comments
stringlengths
1
523
journal-ref
stringlengths
4
404
doi
stringlengths
11
153
report-no
stringlengths
2
254
categories
stringlengths
5
98
license
stringclasses
9 values
orig_abstract
stringlengths
14
3.35k
versions
listlengths
1
60
update_date
stringlengths
10
10
authors_parsed
listlengths
1
1.35k
abstract
stringlengths
11
3.34k
2101.06535
Emiliano De Cristofaro
Chen Ling, Ihab AbuHilal, Jeremy Blackburn, Emiliano De Cristofaro, Savvas Zannettou, and Gianluca Stringhini
Dissecting the Meme Magic: Understanding Indicators of Virality in Image Memes
To appear at the 24th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2021)
null
null
null
cs.HC cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the increasingly important role played by image memes, we do not yet have a solid understanding of the elements that might make a meme go viral on social media. In this paper, we investigate what visual elements distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. Drawing from research in art theory, psychology, marketing, and neuroscience, we develop a codebook to characterize image memes, and use it to annotate a set of 100 image memes collected from 4chan's Politically Incorrect Board (/pol/). On the one hand, we find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions. On the other hand, image memes that do not present a clear subject the viewer can focus attention on, or that include long text are not likely to be re-shared by users. We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared, obtaining an AUC of 0.866 on our dataset. We also show that the indicators of virality identified by our model can help characterize the most viral memes posted on mainstream online social networks too, as our classifiers are able to predict 19 out of the 20 most popular image memes posted on Twitter and Reddit between 2016 and 2018. Overall, our analysis sheds light on what indicators characterize viral and non-viral visual content online, and set the basis for developing better techniques to create or moderate content that is more likely to catch the viewer's attention.
[ { "created": "Sat, 16 Jan 2021 22:36:51 GMT", "version": "v1" } ]
2021-01-19
[ [ "Ling", "Chen", "" ], [ "AbuHilal", "Ihab", "" ], [ "Blackburn", "Jeremy", "" ], [ "De Cristofaro", "Emiliano", "" ], [ "Zannettou", "Savvas", "" ], [ "Stringhini", "Gianluca", "" ] ]
Despite the increasingly important role played by image memes, we do not yet have a solid understanding of the elements that might make a meme go viral on social media. In this paper, we investigate what visual elements distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. Drawing from research in art theory, psychology, marketing, and neuroscience, we develop a codebook to characterize image memes, and use it to annotate a set of 100 image memes collected from 4chan's Politically Incorrect Board (/pol/). On the one hand, we find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions. On the other hand, image memes that do not present a clear subject the viewer can focus attention on, or that include long text, are not likely to be re-shared by users. We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared, obtaining an AUC of 0.866 on our dataset. We also show that the indicators of virality identified by our model can help characterize the most viral memes posted on mainstream online social networks too, as our classifiers are able to predict 19 out of the 20 most popular image memes posted on Twitter and Reddit between 2016 and 2018. Overall, our analysis sheds light on what indicators characterize viral and non-viral visual content online, and sets the basis for developing better techniques to create or moderate content that is more likely to catch the viewer's attention.
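The AUC of 0.866 reported in the abstract above has a direct probabilistic reading: the chance that a randomly chosen viral meme is scored above a randomly chosen non-viral one. A minimal plug-in computation of that statistic, with made-up labels and scores (not the paper's data or classifier):

```python
def auc(labels, scores):
    # AUC = probability that a random positive outranks a random negative,
    # counting ties as half a win
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: two "viral" (1) and two "non-viral" (0) memes,
# scores are hypothetical classifier outputs
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # → 1.0 (perfect ranking)
```

This pairwise-comparison form is quadratic in the number of examples; production libraries compute the same quantity from a sorted sweep over scores.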
1707.04875
Daniel Hsu
Alexandr Andoni and Javad Ghaderi and Daniel Hsu and Dan Rubenstein and Omri Weinstein
Coding sets with asymmetric information
null
null
null
null
cs.DS cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the following one-way asymmetric transmission problem, also a variant of model-based compressed sensing: a resource-limited encoder has to report a small set $S$ from a universe of $N$ items to a more powerful decoder (server). The distinguishing feature is asymmetric information: the subset $S$ is comprised of i.i.d. samples from a prior distribution $\mu$, and $\mu$ is only known to the decoder. The goal for the encoder is to encode $S$ obliviously, while achieving the information-theoretic bound of $|S| \cdot H(\mu)$, i.e., the Shannon entropy bound. We first show that any such compression scheme must be {\em randomized}, if it gains non-trivially from the prior $\mu$. This stands in contrast to the symmetric case (when both the encoder and decoder know $\mu$), where the Huffman code provides a near-optimal deterministic solution. On the other hand, a rather simple argument shows that, when $|S|=k$, a random linear code achieves near-optimal communication rate of about $k\cdot H(\mu)$ bits. Alas, the resulting scheme has prohibitive decoding time: about ${N\choose k} \approx (N/k)^k$. Our main result is a computationally efficient and linear coding scheme, which achieves an $O(\lg\lg N)$-competitive communication ratio compared to the optimal benchmark, and runs in $\text{poly}(N,k)$ time. Our "multi-level" coding scheme uses a combination of hashing and syndrome-decoding of Reed-Solomon codes, and relies on viewing the (unknown) prior $\mu$ as a rather small convex combination of uniform ("flat") distributions.
[ { "created": "Sun, 16 Jul 2017 12:51:42 GMT", "version": "v1" }, { "created": "Fri, 27 Jul 2018 01:15:33 GMT", "version": "v2" } ]
2018-07-30
[ [ "Andoni", "Alexandr", "" ], [ "Ghaderi", "Javad", "" ], [ "Hsu", "Daniel", "" ], [ "Rubenstein", "Dan", "" ], [ "Weinstein", "Omri", "" ] ]
We study the following one-way asymmetric transmission problem, also a variant of model-based compressed sensing: a resource-limited encoder has to report a small set $S$ from a universe of $N$ items to a more powerful decoder (server). The distinguishing feature is asymmetric information: the subset $S$ is comprised of i.i.d. samples from a prior distribution $\mu$, and $\mu$ is only known to the decoder. The goal for the encoder is to encode $S$ obliviously, while achieving the information-theoretic bound of $|S| \cdot H(\mu)$, i.e., the Shannon entropy bound. We first show that any such compression scheme must be {\em randomized}, if it gains non-trivially from the prior $\mu$. This stands in contrast to the symmetric case (when both the encoder and decoder know $\mu$), where the Huffman code provides a near-optimal deterministic solution. On the other hand, a rather simple argument shows that, when $|S|=k$, a random linear code achieves near-optimal communication rate of about $k\cdot H(\mu)$ bits. Alas, the resulting scheme has prohibitive decoding time: about ${N\choose k} \approx (N/k)^k$. Our main result is a computationally efficient and linear coding scheme, which achieves an $O(\lg\lg N)$-competitive communication ratio compared to the optimal benchmark, and runs in $\text{poly}(N,k)$ time. Our "multi-level" coding scheme uses a combination of hashing and syndrome-decoding of Reed-Solomon codes, and relies on viewing the (unknown) prior $\mu$ as a rather small convex combination of uniform ("flat") distributions.
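The benchmark in the abstract above, $|S| \cdot H(\mu)$ bits versus the oblivious $|S| \cdot \lg N$ baseline, is easy to make concrete. A small sketch with an invented prior $\mu$ over $N = 8$ items (the distribution and $k$ are illustrative assumptions, not taken from the paper):

```python
import math

def entropy(mu):
    # Shannon entropy H(mu) in bits of a discrete distribution
    return -sum(p * math.log2(p) for p in mu if p > 0)

# hypothetical prior over a universe of N = 8 items, known only to the decoder
mu = [0.5, 0.25, 0.125, 0.0625, 0.0625, 0.0, 0.0, 0.0]
k = 4  # |S|: number of i.i.d. samples the encoder must report

naive_bits = k * math.log2(len(mu))  # oblivious baseline: lg N bits per item
optimal_bits = k * entropy(mu)       # information-theoretic bound: k * H(mu)
print(naive_bits, optimal_bits)      # 12.0 vs 7.5 for this skewed prior
```

The gap between the two numbers is exactly what the paper's coding schemes try to close while the encoder remains ignorant of $\mu$.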
2405.08209
Rachel Hong
Rachel Hong, William Agnew, Tadayoshi Kohno, and Jamie Morgenstern
Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp
Content warning: This paper discusses societal stereotypes and sexually-explicit material that may be disturbing, distressing, and/or offensive to the reader
null
null
null
cs.CY cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As training datasets become increasingly drawn from unstructured, uncontrolled environments such as the web, researchers and industry practitioners have increasingly relied upon data filtering techniques to "filter out the noise" of web-scraped data. While datasets have been widely shown to reflect the biases and values of their creators, in this paper we contribute to an emerging body of research that assesses the filters used to create these datasets. We show that image-text data filtering also has biases and is value-laden, encoding specific notions of what is counted as "high-quality" data. In our work, we audit a standard approach of image-text CLIP-filtering on the academic benchmark DataComp's CommonPool by analyzing discrepancies of filtering through various annotation techniques across multiple modalities of image, text, and website source. We find that data relating to several imputed demographic groups -- such as LGBTQ+ people, older women, and younger men -- are associated with higher rates of exclusion. Moreover, we demonstrate cases of exclusion amplification: not only are certain marginalized groups already underrepresented in the unfiltered data, but CLIP-filtering excludes data from these groups at higher rates. The data-filtering step in the machine learning pipeline can therefore exacerbate representation disparities already present in the data-gathering step, especially when existing filters are designed to optimize a specifically-chosen downstream performance metric like zero-shot image classification accuracy. Finally, we show that the NSFW filter fails to remove sexually-explicit content from CommonPool, and that CLIP-filtering includes several categories of copyrighted content at high rates. Our conclusions point to a need for fundamental changes in dataset creation and filtering practices.
[ { "created": "Mon, 13 May 2024 21:53:06 GMT", "version": "v1" } ]
2024-05-15
[ [ "Hong", "Rachel", "" ], [ "Agnew", "William", "" ], [ "Kohno", "Tadayoshi", "" ], [ "Morgenstern", "Jamie", "" ] ]
As training datasets become increasingly drawn from unstructured, uncontrolled environments such as the web, researchers and industry practitioners have increasingly relied upon data filtering techniques to "filter out the noise" of web-scraped data. While datasets have been widely shown to reflect the biases and values of their creators, in this paper we contribute to an emerging body of research that assesses the filters used to create these datasets. We show that image-text data filtering also has biases and is value-laden, encoding specific notions of what is counted as "high-quality" data. In our work, we audit a standard approach of image-text CLIP-filtering on the academic benchmark DataComp's CommonPool by analyzing discrepancies of filtering through various annotation techniques across multiple modalities of image, text, and website source. We find that data relating to several imputed demographic groups -- such as LGBTQ+ people, older women, and younger men -- are associated with higher rates of exclusion. Moreover, we demonstrate cases of exclusion amplification: not only are certain marginalized groups already underrepresented in the unfiltered data, but CLIP-filtering excludes data from these groups at higher rates. The data-filtering step in the machine learning pipeline can therefore exacerbate representation disparities already present in the data-gathering step, especially when existing filters are designed to optimize a specifically-chosen downstream performance metric like zero-shot image classification accuracy. Finally, we show that the NSFW filter fails to remove sexually-explicit content from CommonPool, and that CLIP-filtering includes several categories of copyrighted content at high rates. Our conclusions point to a need for fundamental changes in dataset creation and filtering practices.
1806.09849
EPTCS
Mahmoud Khaled (Technical University of Munich, Munich, Germany), Matthias Rungger (Technical University of Munich, Munich, Germany), Majid Zamani (Technical University of Munich, Munich, Germany)
SENSE: Abstraction-Based Synthesis of Networked Control Systems
In Proceedings MeTRiD 2018, arXiv:1806.09330
EPTCS 272, 2018, pp. 65-78
10.4204/EPTCS.272.6
null
cs.SY cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While many studies and tools target the basic stabilizability problem of networked control systems (NCS), nowadays modern systems require more sophisticated objectives such as those expressed as formulae in linear temporal logic or as automata on infinite strings. One general technique to achieve this is based on so-called symbolic models, where complex systems are approximated by finite abstractions, and then, correct-by-construction controllers are automatically synthesized for them. We present tool SENSE for the construction of finite abstractions for NCS and the automated synthesis of controllers. Constructed controllers enforce complex specifications over plants in NCS by taking into account several non-idealities of the communication channels. Given a symbolic model of the plant and network parameters, SENSE can efficiently construct a symbolic model of the NCS, by employing operations on binary decision diagrams (BDDs). Then, it synthesizes symbolic controllers satisfying a class of specifications. It has interfaces for the simulation and the visualization of the resulting closed-loop systems using OMNETPP and MATLAB. Additionally, SENSE can generate ready-to-implement VHDL/Verilog or C/C++ codes from the synthesized controllers.
[ { "created": "Tue, 26 Jun 2018 08:53:44 GMT", "version": "v1" } ]
2018-06-27
[ [ "Khaled", "Mahmoud", "", "Technical University of Munich, Munich, Germany" ], [ "Rungger", "Matthias", "", "Technical University of Munich, Munich, Germany" ], [ "Zamani", "Majid", "", "Technical University of Munich, Munich, Germany" ] ]
While many studies and tools target the basic stabilizability problem of networked control systems (NCS), modern systems require more sophisticated objectives, such as those expressed as formulae in linear temporal logic or as automata on infinite strings. One general technique to achieve this is based on so-called symbolic models, where complex systems are approximated by finite abstractions and correct-by-construction controllers are then automatically synthesized for them. We present the tool SENSE for the construction of finite abstractions for NCS and the automated synthesis of controllers. The constructed controllers enforce complex specifications over plants in NCS by taking into account several non-idealities of the communication channels. Given a symbolic model of the plant and network parameters, SENSE can efficiently construct a symbolic model of the NCS by employing operations on binary decision diagrams (BDDs). It then synthesizes symbolic controllers satisfying a class of specifications. It has interfaces for the simulation and visualization of the resulting closed-loop systems using OMNETPP and MATLAB. Additionally, SENSE can generate ready-to-implement VHDL/Verilog or C/C++ code from the synthesized controllers.
2210.13522
Anjali Narayan-Chen
Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, Nanyun Peng
Context-Situated Pun Generation
Accepted to EMNLP 2022 main conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous work on pun generation commonly begins with a given pun word (a pair of homophones for heterographic pun generation and a polyseme for homographic pun generation) and seeks to generate an appropriate pun. While this may enable efficient pun generation, we believe that a pun is most entertaining if it fits appropriately within a given context, e.g., a given situation or dialogue. In this work, we propose a new task, context-situated pun generation, where a specific context represented by a set of keywords is provided, and the task is to first identify suitable pun words that are appropriate for the context, then generate puns based on the context keywords and the identified pun words. We collect CUP (Context-sitUated Pun), containing 4.5k tuples of context words and pun pairs. Based on the new data and setup, we propose a pipeline system for context-situated pun generation, including a pun word retrieval module that identifies suitable pun words for a given context, and a generation module that generates puns from context keywords and pun words. Human evaluation shows that 69% of our top retrieved pun words can be used to generate context-situated puns, and our generation module yields successful puns 31% of the time given a plausible tuple of context words and pun pair, almost tripling the yield of a state-of-the-art pun generation model. With an end-to-end evaluation, our pipeline system with the top-1 retrieved pun pair for a given context can generate successful puns 40% of the time, better than all other modeling variations but 32% lower than the human success rate. This highlights the difficulty of the task, and encourages more research in this direction.
[ { "created": "Mon, 24 Oct 2022 18:24:48 GMT", "version": "v1" } ]
2022-10-26
[ [ "Sun", "Jiao", "" ], [ "Narayan-Chen", "Anjali", "" ], [ "Oraby", "Shereen", "" ], [ "Gao", "Shuyang", "" ], [ "Chung", "Tagyoung", "" ], [ "Huang", "Jing", "" ], [ "Liu", "Yang", "" ], [ "Peng", "Nanyun", "" ] ]
Previous work on pun generation commonly begins with a given pun word (a pair of homophones for heterographic pun generation and a polyseme for homographic pun generation) and seeks to generate an appropriate pun. While this may enable efficient pun generation, we believe that a pun is most entertaining if it fits appropriately within a given context, e.g., a given situation or dialogue. In this work, we propose a new task, context-situated pun generation, where a specific context represented by a set of keywords is provided, and the task is to first identify suitable pun words that are appropriate for the context, then generate puns based on the context keywords and the identified pun words. We collect CUP (Context-sitUated Pun), containing 4.5k tuples of context words and pun pairs. Based on the new data and setup, we propose a pipeline system for context-situated pun generation, including a pun word retrieval module that identifies suitable pun words for a given context, and a generation module that generates puns from context keywords and pun words. Human evaluation shows that 69% of our top retrieved pun words can be used to generate context-situated puns, and our generation module yields successful puns 31% of the time given a plausible tuple of context words and pun pair, almost tripling the yield of a state-of-the-art pun generation model. With an end-to-end evaluation, our pipeline system with the top-1 retrieved pun pair for a given context can generate successful puns 40% of the time, better than all other modeling variations but 32% lower than the human success rate. This highlights the difficulty of the task, and encourages more research in this direction.
1410.4603
Hamzah Asyrani Sulaiman
Hamzah Asyrani Sulaiman, Abdullah Bade and Mohd Harun Abdullah
Efficient Distance Computation Algorithm between Nearly Intersected Objects Using Dynamic Pivot Point in Virtual Environment Application
6 pages
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding nearly accurate distance between two or more nearly intersecting three-dimensional (3D) objects is vital especially for collision determination such as in virtual surgeon simulation and real-time car crash simulation. Instead of performing broad phase collision detection, we need to check for accuracy of detection by running narrow phase collision detection. One of the important elements for narrow phase collision detection is to determine the precise distance between two or more nearly intersecting objects or polygons in order to prepare the area for potential colliding. Distance computation plays important roles in determine the exact point of contact between two or more nearly intersecting polygons where the preparation for collision detection is determined at the earlier stage. In this paper, we describes our current works of determining the distance between objects using dynamic pivot point that will be used as reference point to reduce the complexity searching for potential point of contacts. By using Axis-Aligned Bounding Box for each polygon, we calculate a dynamic pivot point that will become our reference point to determine the potential candidates for distance computation. The test our finding distance will be simplified by using our method instead of performing unneeded operations. Our method provides faster solution than the previous method where it helps to determine the point of contact efficiently and faster than the other method.
[ { "created": "Thu, 16 Oct 2014 23:08:04 GMT", "version": "v1" } ]
2014-10-20
[ [ "Sulaiman", "Hamzah Asyrani", "" ], [ "Bade", "Abdullah", "" ], [ "Abdullah", "Mohd Harun", "" ] ]
Finding a nearly accurate distance between two or more nearly intersecting three-dimensional (3D) objects is vital for collision determination, such as in virtual surgery simulation and real-time car crash simulation. After broad-phase collision detection, we need to verify the detection by running narrow-phase collision detection. One important element of narrow-phase collision detection is determining the precise distance between two or more nearly intersecting objects or polygons, in order to prepare the area for a potential collision. Distance computation plays an important role in determining the exact point of contact between two or more nearly intersecting polygons, where the preparation for collision detection is carried out at an earlier stage. In this paper, we describe our current work on determining the distance between objects using a dynamic pivot point that serves as a reference point, reducing the complexity of searching for potential points of contact. Using an axis-aligned bounding box for each polygon, we calculate a dynamic pivot point that becomes our reference point for determining the potential candidates for distance computation. Our method simplifies the distance search by avoiding unneeded operations, and it provides a faster solution than previous methods, helping to determine the point of contact more efficiently.
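The building blocks named in the abstract above, axis-aligned bounding boxes and a pivot point derived from them, can be sketched in a few lines. This is an illustrative reconstruction under the assumption that the pivot is the midpoint between the two box centers; the authors' exact pivot-update rule is not specified here:

```python
def aabb(points):
    # axis-aligned bounding box of a 3D point set: (min corner, max corner)
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_center(box):
    lo, hi = box
    return tuple((a + b) / 2 for a, b in zip(lo, hi))

def aabb_distance(box_a, box_b):
    # minimal separation between two AABBs; 0 if they overlap on every axis
    d2 = 0.0
    for (alo, ahi), (blo, bhi) in zip(zip(*box_a), zip(*box_b)):
        gap = max(alo - bhi, blo - ahi, 0.0)
        d2 += gap * gap
    return d2 ** 0.5

box_a = aabb([(0, 0, 0), (1, 1, 1)])
box_b = aabb([(3, 0, 0), (4, 1, 1)])
# assumed pivot: midpoint between the two box centers, used as the
# reference point for ranking candidate vertices for exact distance tests
pivot = tuple((ca + cb) / 2
              for ca, cb in zip(aabb_center(box_a), aabb_center(box_b)))
print(aabb_distance(box_a, box_b), pivot)  # → 2.0 (2.0, 0.5, 0.5)
```

Only vertices near the pivot would then need exact polygon-to-polygon distance tests, which is the claimed source of the speed-up.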
2006.08444
Anas AbuDaqa
Anas AbuDaqa, Amjad Abu-Hassan, Muhammad Imam
Taxonomy and Practical Evaluation of Primality Testing Algorithms
20 pages, 16 figures
null
null
null
cs.CR math.NT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern cryptography algorithms are commonly used to ensure information security. Prime numbers are needed in many asymmetric cryptography algorithms. For example, RSA algorithm selects two large prime numbers and multiplies to each other to obtain a large composite number whose factorization is very difficult. Producing a prime number is not an easy task as they are not distributed regularly through integers. Primality testing algorithms are used to determine whether a particular number is prime or composite. In this paper, an intensive survey is thoroughly conducted among the several primality testing algorithms showing the pros and cons, the time complexity, and a brief summary of each algorithm. Besides, an implementation of these algorithms is accomplished using Java and Python as programming languages to evaluate the efficiency of both the algorithms and the programming languages.
[ { "created": "Mon, 15 Jun 2020 14:40:50 GMT", "version": "v1" } ]
2020-06-16
[ [ "AbuDaqa", "Anas", "" ], [ "Abu-Hassan", "Amjad", "" ], [ "Imam", "Muhammad", "" ] ]
Modern cryptography algorithms are commonly used to ensure information security, and prime numbers are needed in many asymmetric cryptography algorithms. For example, the RSA algorithm selects two large prime numbers and multiplies them together to obtain a large composite number whose factorization is very difficult. Producing a prime number is not an easy task, as primes are not distributed regularly among the integers. Primality testing algorithms are used to determine whether a particular number is prime or composite. In this paper, an intensive survey of several primality testing algorithms is conducted, showing the pros and cons, the time complexity, and a brief summary of each algorithm. In addition, these algorithms are implemented in Java and Python to evaluate the efficiency of both the algorithms and the programming languages.
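One of the standard probabilistic tests a survey like the one above would cover is Miller–Rabin, which is also what cryptographic libraries typically use to generate RSA primes. A compact sketch (round count and small-prime pre-filter are conventional choices, not taken from the paper):

```python
import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test: never rejects a prime;
    # a composite survives each round with probability at most 1/4
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation: a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness to compositeness
    return True

print(is_probable_prime(2**61 - 1))  # → True (a Mersenne prime)
```

Unlike trial division, the cost per round is one modular exponentiation, so the test scales to the thousand-bit numbers RSA needs.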
2109.04743
Juan David Munoz-Osorio JMz
Juan David Munoz-Osorio and Felix Allmendinger
A Suitable Hierarchical Framework with Arbitrary Task Dimensions under Unilateral Constraints for physical Human Robot Interaction
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the last years, several hierarchical frameworks have been proposed to deal with highly-redundant robotic systems. Some of that systems are expected to perform multiple tasks and physically to interact with the environment. However, none of the proposed frameworks is able to manage multiple tasks with arbitrary task dimensions, while respecting unilateral constraints at position, velocity, acceleration and force level, and at the same time, to react intuitively to external forces. This work proposes a framework that addresses this problem. The framework is tested in simulation and on a real robot. The experiments on the redundant collaborative industrial robot (KUKA LBR iiwa) demonstrate the advantage of the framework compared to state-of-the-art approaches. The framework reacts intuitively to external forces and is able to limit joint positions, velocities, accelerations and forces.
[ { "created": "Fri, 10 Sep 2021 09:09:37 GMT", "version": "v1" }, { "created": "Tue, 5 Apr 2022 11:55:31 GMT", "version": "v2" } ]
2022-04-06
[ [ "Munoz-Osorio", "Juan David", "" ], [ "Allmendinger", "Felix", "" ] ]
In recent years, several hierarchical frameworks have been proposed to deal with highly redundant robotic systems. Some of these systems are expected to perform multiple tasks and to interact physically with the environment. However, none of the proposed frameworks can manage multiple tasks with arbitrary task dimensions while respecting unilateral constraints at the position, velocity, acceleration, and force levels, and at the same time reacting intuitively to external forces. This work proposes a framework that addresses this problem. The framework is tested in simulation and on a real robot. Experiments on the redundant collaborative industrial robot KUKA LBR iiwa demonstrate the advantage of the framework compared to state-of-the-art approaches. The framework reacts intuitively to external forces and is able to limit joint positions, velocities, accelerations, and forces.
2103.11773
Fan Cheng
Fan Cheng, Anastasios Panagiotelis, Rob J Hyndman
Computationally Efficient Learning of Statistical Manifolds
29 pages, 10 figures
null
null
null
cs.LG stat.AP stat.CO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Analyzing high-dimensional data with manifold learning algorithms often requires searching for the nearest neighbors of all observations. This presents a computational bottleneck in statistical manifold learning when observations of probability distributions rather than vector-valued variables are available or when data size is large. We resolve this problem by proposing a new method for approximation in statistical manifold learning. The novelty of our approximation is the strongly consistent distance estimators based on independent and identically distributed samples from probability distributions. By exploiting the connection between Hellinger/total variation distance for discrete distributions and the L2/L1 norm, we demonstrate that the proposed distance estimators, combined with approximate nearest neighbor searching, could largely improve the computational efficiency with little to no loss in the accuracy of manifold embedding. The result is robust to different manifold learning algorithms and different approximate nearest neighbor algorithms. The proposed method is applied to learning statistical manifolds of electricity usage. This application demonstrates how underlying structures in high dimensional data, including anomalies, can be visualized and identified, in a way that is scalable to large datasets.
[ { "created": "Mon, 22 Feb 2021 12:04:23 GMT", "version": "v1" }, { "created": "Thu, 10 Mar 2022 03:34:08 GMT", "version": "v2" } ]
2022-03-11
[ [ "Cheng", "Fan", "" ], [ "Panagiotelis", "Anastasios", "" ], [ "Hyndman", "Rob J", "" ] ]
Analyzing high-dimensional data with manifold learning algorithms often requires searching for the nearest neighbors of all observations. This presents a computational bottleneck in statistical manifold learning when observations of probability distributions rather than vector-valued variables are available or when data size is large. We resolve this problem by proposing a new method for approximation in statistical manifold learning. The novelty of our approximation is the strongly consistent distance estimators based on independent and identically distributed samples from probability distributions. By exploiting the connection between Hellinger/total variation distance for discrete distributions and the L2/L1 norm, we demonstrate that the proposed distance estimators, combined with approximate nearest neighbor searching, could largely improve the computational efficiency with little to no loss in the accuracy of manifold embedding. The result is robust to different manifold learning algorithms and different approximate nearest neighbor algorithms. The proposed method is applied to learning statistical manifolds of electricity usage. This application demonstrates how underlying structures in high dimensional data, including anomalies, can be visualized and identified, in a way that is scalable to large datasets.
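The norm connection exploited in the abstract above is a pair of identities: for discrete distributions, the Hellinger distance is $1/\sqrt{2}$ times the L2 norm of the difference of square-root vectors, and total variation is half the L1 norm of the difference. A minimal sketch of both (the paper's estimators plug in empirical frequencies from i.i.d. samples; the fixed vectors below are toy values):

```python
import math

def hellinger(p, q):
    # Hellinger distance = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

def total_variation(p, q):
    # total variation distance = (1/2) * || p - q ||_1
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

p = [0.5, 0.5]   # toy discrete distributions
q = [1.0, 0.0]
print(total_variation(p, q), hellinger(p, q))
```

Because both reduce to vector norms, approximate nearest-neighbor indexes built for L1/L2 can serve the distribution-valued setting directly, which is the computational shortcut the paper builds on.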
1711.01871
Janne H. Korhonen
Alkida Balliu and Juho Hirvonen and Janne H. Korhonen and Tuomo Lempi\"ainen and Dennis Olivetti and Jukka Suomela
New Classes of Distributed Time Complexity
null
null
null
null
cs.DC cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A number of recent papers -- e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) -- have advanced our understanding of one of the most fundamental questions in theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem $\Pi$ in which a solution can be verified by checking all radius-$O(1)$ neighbourhoods, and the question is what is the smallest $T$ such that a solution can be computed so that each node chooses its own output based on its radius-$T$ neighbourhood. Here $T$ is the distributed time complexity of $\Pi$. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are $\Theta(1)$, $\Theta(\log^* n)$, $\Theta(\log n)$, $\Theta(n^{1/k})$, and $\Theta(n)$. It is also known that there are two gaps: one between $\omega(1)$ and $o(\log \log^* n)$, and another between $\omega(\log^* n)$ and $o(\log n)$. It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple -- indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including $\Theta(\log^{\alpha}n)$ for any $\alpha\ge1$, $2^{\Theta(\log^{\alpha}n)}$ for any $\alpha\le 1$, and $\Theta(n^{\alpha})$ for any $\alpha <1/2$ in the high end of the complexity spectrum, and $\Theta(\log^{\alpha}\log^* n)$ for any $\alpha\ge 1$, $\smash{2^{\Theta(\log^{\alpha}\log^* n)}}$ for any $\alpha\le 1$, and $\Theta((\log^* n)^{\alpha})$ for any $\alpha \le 1$ in the low end; here $\alpha$ is a positive rational number.
[ { "created": "Mon, 6 Nov 2017 13:05:30 GMT", "version": "v1" }, { "created": "Thu, 5 Apr 2018 12:21:26 GMT", "version": "v2" } ]
2018-04-06
[ [ "Balliu", "Alkida", "" ], [ "Hirvonen", "Juho", "" ], [ "Korhonen", "Janne H.", "" ], [ "Lempiäinen", "Tuomo", "" ], [ "Olivetti", "Dennis", "" ], [ "Suomela", "Jukka", "" ] ]
A number of recent papers -- e.g. Brandt et al. (STOC 2016), Chang et al. (FOCS 2016), Ghaffari & Su (SODA 2017), Brandt et al. (PODC 2017), and Chang & Pettie (FOCS 2017) -- have advanced our understanding of one of the most fundamental questions in the theory of distributed computing: what are the possible time complexity classes of LCL problems in the LOCAL model? In essence, we have a graph problem $\Pi$ in which a solution can be verified by checking all radius-$O(1)$ neighbourhoods, and the question is what is the smallest $T$ such that a solution can be computed so that each node chooses its own output based on its radius-$T$ neighbourhood. Here $T$ is the distributed time complexity of $\Pi$. The time complexity classes for deterministic algorithms in bounded-degree graphs that are known to exist by prior work are $\Theta(1)$, $\Theta(\log^* n)$, $\Theta(\log n)$, $\Theta(n^{1/k})$, and $\Theta(n)$. It is also known that there are two gaps: one between $\omega(1)$ and $o(\log \log^* n)$, and another between $\omega(\log^* n)$ and $o(\log n)$. It has been conjectured that many more gaps exist, and that the overall time hierarchy is relatively simple -- indeed, this is known to be the case in restricted graph families such as cycles and grids. We show that the picture is much more diverse than previously expected. We present a general technique for engineering LCL problems with numerous different deterministic time complexities, including $\Theta(\log^{\alpha}n)$ for any $\alpha\ge1$, $2^{\Theta(\log^{\alpha}n)}$ for any $\alpha\le 1$, and $\Theta(n^{\alpha})$ for any $\alpha <1/2$ in the high end of the complexity spectrum, and $\Theta(\log^{\alpha}\log^* n)$ for any $\alpha\ge 1$, $\smash{2^{\Theta(\log^{\alpha}\log^* n)}}$ for any $\alpha\le 1$, and $\Theta((\log^* n)^{\alpha})$ for any $\alpha \le 1$ in the low end; here $\alpha$ is a positive rational number.
1707.00075
Alex Beutel
Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
Presented as a poster at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017)
null
null
null
cs.LG cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
[ { "created": "Sat, 1 Jul 2017 01:09:33 GMT", "version": "v1" }, { "created": "Fri, 7 Jul 2017 01:31:36 GMT", "version": "v2" } ]
2017-07-10
[ [ "Beutel", "Alex", "" ], [ "Chen", "Jilin", "" ], [ "Zhao", "Zhe", "" ], [ "Chi", "Ed H.", "" ] ]
How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
2304.09344
Andrew Su
Jackson Callaghan, Colleen H. Xu, Jiwen Xin, Marco Alvarado Cano, Anders Riutta, Eric Zhou, Rohan Juneja, Yao Yao, Madhumita Narayan, Kristina Hanspers, Ayushi Agrawal, Alexander R. Pico, Chunlei Wu, Andrew I. Su
BioThings Explorer: a query engine for a federated knowledge graph of biomedical APIs
null
null
null
null
cs.DB q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Knowledge graphs are an increasingly common data structure for representing biomedical information. These knowledge graphs can easily represent heterogeneous types of information, and many algorithms and tools exist for querying and analyzing graphs. Biomedical knowledge graphs have been used in a variety of applications, including drug repurposing, identification of drug targets, prediction of drug side effects, and clinical decision support. Typically, knowledge graphs are constructed by centralization and integration of data from multiple disparate sources. Here, we describe BioThings Explorer, an application that can query a virtual, federated knowledge graph derived from the aggregated information in a network of biomedical web services. BioThings Explorer leverages semantically precise annotations of the inputs and outputs for each resource, and automates the chaining of web service calls to execute multi-step graph queries. Because there is no large, centralized knowledge graph to maintain, BioThings Explorer is distributed as a lightweight application that dynamically retrieves information at query time. More information can be found at https://explorer.biothings.io, and code is available at https://github.com/biothings/biothings_explorer.
[ { "created": "Tue, 18 Apr 2023 23:44:07 GMT", "version": "v1" } ]
2023-04-20
[ [ "Callaghan", "Jackson", "" ], [ "Xu", "Colleen H.", "" ], [ "Xin", "Jiwen", "" ], [ "Cano", "Marco Alvarado", "" ], [ "Riutta", "Anders", "" ], [ "Zhou", "Eric", "" ], [ "Juneja", "Rohan", "" ], [ "Yao", "Yao", "" ], [ "Narayan", "Madhumita", "" ], [ "Hanspers", "Kristina", "" ], [ "Agrawal", "Ayushi", "" ], [ "Pico", "Alexander R.", "" ], [ "Wu", "Chunlei", "" ], [ "Su", "Andrew I.", "" ] ]
Knowledge graphs are an increasingly common data structure for representing biomedical information. These knowledge graphs can easily represent heterogeneous types of information, and many algorithms and tools exist for querying and analyzing graphs. Biomedical knowledge graphs have been used in a variety of applications, including drug repurposing, identification of drug targets, prediction of drug side effects, and clinical decision support. Typically, knowledge graphs are constructed by centralization and integration of data from multiple disparate sources. Here, we describe BioThings Explorer, an application that can query a virtual, federated knowledge graph derived from the aggregated information in a network of biomedical web services. BioThings Explorer leverages semantically precise annotations of the inputs and outputs for each resource, and automates the chaining of web service calls to execute multi-step graph queries. Because there is no large, centralized knowledge graph to maintain, BioThings Explorer is distributed as a lightweight application that dynamically retrieves information at query time. More information can be found at https://explorer.biothings.io, and code is available at https://github.com/biothings/biothings_explorer.
1509.08205
Niharika Sachdeva
Niharika Sachdeva and Ponnurangam Kumaraguru
Characterising Behavior and Emotions on Social Media for Safety: Exploring Online Communication between Police and Citizens
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Increased use of social media by police to connect with citizens has encouraged researchers to study different aspects of information exchange (e.g. type of information, credibility and propagation) during emergency and crisis situations. Existing research, however, lacks an understanding of human behavior such as engagement, emotions and social interaction between citizens and police departments on social media. Several social media studies explore and show technological implications of human behavioral aspects in various contexts such as workplace interaction and depression in young mothers. In this paper, we study online interactions between citizens and Indian police in the context of day-to-day policing, including safety concerns, advisories, etc. Indian police departments use Facebook to issue advisories, send alerts and receive citizen complaints and suggestions regarding safety issues and day-to-day policing. We explore how citizens express their emotions and social support on Facebook. Our work discusses technological implications of behavioral aspects on the social well-being of citizens.
[ { "created": "Mon, 28 Sep 2015 06:03:41 GMT", "version": "v1" } ]
2015-09-29
[ [ "Sachdeva", "Niharika", "" ], [ "Kumaraguru", "Ponnurangam", "" ] ]
Increased use of social media by police to connect with citizens has encouraged researchers to study different aspects of information exchange (e.g. type of information, credibility and propagation) during emergency and crisis situations. Existing research, however, lacks an understanding of human behavior such as engagement, emotions and social interaction between citizens and police departments on social media. Several social media studies explore and show technological implications of human behavioral aspects in various contexts such as workplace interaction and depression in young mothers. In this paper, we study online interactions between citizens and Indian police in the context of day-to-day policing, including safety concerns, advisories, etc. Indian police departments use Facebook to issue advisories, send alerts and receive citizen complaints and suggestions regarding safety issues and day-to-day policing. We explore how citizens express their emotions and social support on Facebook. Our work discusses technological implications of behavioral aspects on the social well-being of citizens.
0905.0740
Ignacio Vega-Paez M en C
Gerardo Cisneros
A FORTRAN coded regular expression Compiler for IBM 1130 Computing System
This version of REC is an archaeological reconstruction of the REC/A language on the IBM 1130 Simulator (SIMH IBM 1130 Emulator and Disk Monitor System R2V12) from the Computer History Simulation Project (www.ibm1130.org); the REC language is kept alive by Ignacio Vega-Paez
Acta Mexicana de Ciencia y Tecnologia Vol. IV No. 1, page 30-86, 1970
null
IBP-Memo 2008-12
cs.CL cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
REC (Regular Expression Compiler) is a concise programming language which allows students to write programs without knowledge of the complicated syntax of languages like FORTRAN and ALGOL. The language is recursive and contains only four elements for control. This paper describes an interpreter of REC written in FORTRAN.
[ { "created": "Wed, 6 May 2009 04:29:51 GMT", "version": "v1" } ]
2011-07-12
[ [ "Cisneros", "Gerardo", "" ] ]
REC (Regular Expression Compiler) is a concise programming language which allows students to write programs without knowledge of the complicated syntax of languages like FORTRAN and ALGOL. The language is recursive and contains only four elements for control. This paper describes an interpreter of REC written in FORTRAN.
1502.04244
Nian Li
Maosheng Xiong and Nian Li
Optimal cyclic codes with generalized Niho type zeroes and the weight distribution
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we extend the works \cite{gegeng2,XLZD} further in two directions and compute the weight distribution of these cyclic codes under more relaxed conditions. It is interesting to note that many cyclic codes in the family are optimal and have only a few non-zero weights. Besides using similar ideas from \cite{gegeng2,XLZD}, we carry out some subtle manipulation of certain exponential sums.
[ { "created": "Sat, 14 Feb 2015 21:15:43 GMT", "version": "v1" } ]
2015-02-17
[ [ "Xiong", "Maosheng", "" ], [ "Li", "Nian", "" ] ]
In this paper we extend the works \cite{gegeng2,XLZD} further in two directions and compute the weight distribution of these cyclic codes under more relaxed conditions. It is interesting to note that many cyclic codes in the family are optimal and have only a few non-zero weights. Besides using similar ideas from \cite{gegeng2,XLZD}, we carry out some subtle manipulation of certain exponential sums.
2102.08779
Emmanouil Papadogiannakis
Emmanouil Papadogiannakis, Panagiotis Papadopoulos, Nicolas Kourtellis and Evangelos P. Markatos
User Tracking in the Post-cookie Era: How Websites Bypass GDPR Consent to Track Users
12 pages, Published at The Web Conference 2021 (WWW 2021). Please cite the WWW version; Made source code publicly available
null
10.1145/3442381.3450056
null
cs.CY cs.CR
http://creativecommons.org/licenses/by/4.0/
During the past few years, mostly as a result of the GDPR and the CCPA, websites have started to present users with cookie consent banners. These banners are web forms where users can state their preferences and declare which cookies they would like to accept, if such an option exists. Although requesting consent before storing any identifiable information is a good start towards respecting user privacy, previous research has shown that websites do not always respect user choices. Furthermore, considering the ever-decreasing reliance of trackers on cookies and the actions browser vendors take by blocking or restricting third-party cookies, we anticipate a world where stateless tracking emerges, either because trackers or websites do not use cookies, or because users simply refuse to accept any. In this paper, we explore whether websites use more persistent and sophisticated forms of tracking in order to track users who said they do not want cookies. Such forms of tracking include first-party ID leaking, ID synchronization, and browser fingerprinting. Our results suggest that websites do use such modern forms of tracking even before users had the opportunity to register their choice with respect to cookies. To add insult to injury, when users choose to raise their voice and reject all cookies, user tracking only intensifies. As a result, users' choices play very little role with respect to tracking: we measured that more than 75% of tracking activities happened before users had the opportunity to make a selection in the cookie consent banner, or when users chose to reject all cookies.
[ { "created": "Wed, 17 Feb 2021 14:11:10 GMT", "version": "v1" }, { "created": "Thu, 10 Feb 2022 15:22:35 GMT", "version": "v2" } ]
2022-02-11
[ [ "Papadogiannakis", "Emmanouil", "" ], [ "Papadopoulos", "Panagiotis", "" ], [ "Kourtellis", "Nicolas", "" ], [ "Markatos", "Evangelos P.", "" ] ]
During the past few years, mostly as a result of the GDPR and the CCPA, websites have started to present users with cookie consent banners. These banners are web forms where users can state their preferences and declare which cookies they would like to accept, if such an option exists. Although requesting consent before storing any identifiable information is a good start towards respecting user privacy, previous research has shown that websites do not always respect user choices. Furthermore, considering the ever-decreasing reliance of trackers on cookies and the actions browser vendors take by blocking or restricting third-party cookies, we anticipate a world where stateless tracking emerges, either because trackers or websites do not use cookies, or because users simply refuse to accept any. In this paper, we explore whether websites use more persistent and sophisticated forms of tracking in order to track users who said they do not want cookies. Such forms of tracking include first-party ID leaking, ID synchronization, and browser fingerprinting. Our results suggest that websites do use such modern forms of tracking even before users had the opportunity to register their choice with respect to cookies. To add insult to injury, when users choose to raise their voice and reject all cookies, user tracking only intensifies. As a result, users' choices play very little role with respect to tracking: we measured that more than 75% of tracking activities happened before users had the opportunity to make a selection in the cookie consent banner, or when users chose to reject all cookies.
2211.11479
Charilaos Papaioannou
Charilaos Papaioannou, Ioannis Valiantzas, Theodoros Giannakopoulos, Maximos Kaliakatsos-Papakostas, Alexandros Potamianos
A Dataset for Greek Traditional and Folk Music: Lyra
null
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studying under-represented music traditions under the MIR scope is crucial, not only for developing novel analysis tools, but also for unveiling musical functions that might prove useful in studying world musics. This paper presents a dataset for Greek Traditional and Folk music that includes 1570 pieces, totaling around 80 hours of data. The dataset incorporates YouTube timestamped links for retrieving audio and video, along with rich metadata information with regard to instrumentation, geography and genre, among others. The content has been collected from a Greek documentary series that is available online, where academics present music traditions of Greece with live music and dance performances during the show, along with discussions about social, cultural and musicological aspects of the presented music. This procedure has resulted in a significant wealth of descriptions regarding a variety of aspects, such as musical genre, places of origin and musical instruments. In addition, the audio recordings were performed under strict production-level specifications, in terms of recording equipment, leading to very clean and homogeneous audio content. In this work, apart from presenting the dataset in detail, we propose a baseline deep-learning classification approach to recognize the involved musicological attributes. The dataset, the baseline classification methods and the models are provided in public repositories. Future directions for further refining the dataset are also discussed.
[ { "created": "Mon, 21 Nov 2022 14:15:43 GMT", "version": "v1" } ]
2022-11-22
[ [ "Papaioannou", "Charilaos", "" ], [ "Valiantzas", "Ioannis", "" ], [ "Giannakopoulos", "Theodoros", "" ], [ "Kaliakatsos-Papakostas", "Maximos", "" ], [ "Potamianos", "Alexandros", "" ] ]
Studying under-represented music traditions under the MIR scope is crucial, not only for developing novel analysis tools, but also for unveiling musical functions that might prove useful in studying world musics. This paper presents a dataset for Greek Traditional and Folk music that includes 1570 pieces, totaling around 80 hours of data. The dataset incorporates YouTube timestamped links for retrieving audio and video, along with rich metadata information with regard to instrumentation, geography and genre, among others. The content has been collected from a Greek documentary series that is available online, where academics present music traditions of Greece with live music and dance performances during the show, along with discussions about social, cultural and musicological aspects of the presented music. This procedure has resulted in a significant wealth of descriptions regarding a variety of aspects, such as musical genre, places of origin and musical instruments. In addition, the audio recordings were performed under strict production-level specifications, in terms of recording equipment, leading to very clean and homogeneous audio content. In this work, apart from presenting the dataset in detail, we propose a baseline deep-learning classification approach to recognize the involved musicological attributes. The dataset, the baseline classification methods and the models are provided in public repositories. Future directions for further refining the dataset are also discussed.
1411.5752
Bharath Hariharan
Bharath Hariharan and Pablo Arbel\'aez and Ross Girshick and Jitendra Malik
Hypercolumns for Object Segmentation and Fine-grained Localization
CVPR Camera ready
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as feature representation. However, the information in this layer may be too coarse to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 [22] mean AP^r to 60.0; keypoint localization, where we get a 3.3 point boost over [20]; and part labeling, where we show a 6.6 point gain over a strong baseline.
[ { "created": "Fri, 21 Nov 2014 03:12:33 GMT", "version": "v1" }, { "created": "Sat, 25 Apr 2015 23:08:59 GMT", "version": "v2" } ]
2015-04-28
[ [ "Hariharan", "Bharath", "" ], [ "Arbeláez", "Pablo", "" ], [ "Girshick", "Ross", "" ], [ "Malik", "Jitendra", "" ] ]
Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as feature representation. However, the information in this layer may be too coarse to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 [22] mean AP^r to 60.0; keypoint localization, where we get a 3.3 point boost over [20]; and part labeling, where we show a 6.6 point gain over a strong baseline.
1404.4661
Jiang Wang
Jiang Wang, Yang song, Thomas Leung, Chuck Rosenberg, Jinbin Wang, James Philbin, Bo Chen, Ying Wu
Learning Fine-grained Image Similarity with Deep Ranking
CVPR 2014
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn a similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is proposed to learn the model with distributed asynchronous stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.
[ { "created": "Thu, 17 Apr 2014 22:09:16 GMT", "version": "v1" } ]
2014-04-21
[ [ "Wang", "Jiang", "" ], [ "song", "Yang", "" ], [ "Leung", "Thomas", "" ], [ "Rosenberg", "Chuck", "" ], [ "Wang", "Jinbin", "" ], [ "Philbin", "James", "" ], [ "Chen", "Bo", "" ], [ "Wu", "Ying", "" ] ]
Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn a similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is proposed to learn the model with distributed asynchronous stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.
0909.1784
Stavros Harizopoulos
Stavros Harizopoulos (HP Labs), Mehul Shah, Justin Meza (UCLA), Parthasarathy Ranganathan (HP Labs)
Energy Efficiency: The New Holy Grail of Data Management Systems Research
CIDR 2009
null
null
null
cs.DB cs.PF
http://creativecommons.org/licenses/by/3.0/
Energy costs are quickly rising in large-scale data centers and are soon projected to overtake the cost of hardware. As a result, data center operators have recently started turning to more energy-friendly hardware. Despite the growing body of research in power management techniques, there has been little work to date on energy efficiency from a data management software perspective. In this paper, we argue that hardware-only approaches are only part of the solution, and that data management software will be key in optimizing for energy efficiency. We discuss the problems arising from growing energy use in data centers and the trends that point to an increasing set of opportunities for software-level optimizations. Using two simple experiments, we illustrate the potential of such optimizations, and, motivated by these examples, we discuss general approaches for reducing energy waste. Lastly, we point out existing places within database systems that are promising for energy-efficiency optimizations and urge the data management systems community to shift focus from performance-oriented research to energy-efficient computing.
[ { "created": "Wed, 9 Sep 2009 18:10:39 GMT", "version": "v1" } ]
2009-09-15
[ [ "Harizopoulos", "Stavros", "", "HP Labs" ], [ "Shah", "Mehul", "", "UCLA" ], [ "Meza", "Justin", "", "UCLA" ], [ "Ranganathan", "Parthasarathy", "", "HP Labs" ] ]
Energy costs are quickly rising in large-scale data centers and are soon projected to overtake the cost of hardware. As a result, data center operators have recently started turning to more energy-friendly hardware. Despite the growing body of research in power management techniques, there has been little work to date on energy efficiency from a data management software perspective. In this paper, we argue that hardware-only approaches are only part of the solution, and that data management software will be key in optimizing for energy efficiency. We discuss the problems arising from growing energy use in data centers and the trends that point to an increasing set of opportunities for software-level optimizations. Using two simple experiments, we illustrate the potential of such optimizations, and, motivated by these examples, we discuss general approaches for reducing energy waste. Lastly, we point out existing places within database systems that are promising for energy-efficiency optimizations and urge the data management systems community to shift focus from performance-oriented research to energy-efficient computing.
2006.14592
Guojun Zhang
Guojun Zhang, Kaiwen Wu, Pascal Poupart and Yaoliang Yu
Newton-type Methods for Minimax Optimization
code update
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential games, in particular two-player sequential zero-sum games (a.k.a. minimax optimization), have been an important modeling tool in applied science and received renewed interest in machine learning due to many recent applications, such as adversarial training, generative models and reinforcement learning. However, existing theory mostly focuses on convex-concave functions with few exceptions. In this work, we propose two novel Newton-type algorithms for nonconvex-nonconcave minimax optimization. We prove their local convergence at strict local minimax points, which are surrogates of global solutions. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to strict local minimax points; (b) they are much more effective when the problem is ill-conditioned; (c) their computational complexity remains similar. We verify the effectiveness of our Newton-type algorithms through experiments on training GANs which are intrinsically nonconvex and ill-conditioned. Our code is available at https://github.com/watml/min-max-2nd-order.
[ { "created": "Thu, 25 Jun 2020 17:38:00 GMT", "version": "v1" }, { "created": "Thu, 11 Feb 2021 01:54:34 GMT", "version": "v2" }, { "created": "Sat, 18 Feb 2023 23:10:02 GMT", "version": "v3" } ]
2023-02-21
[ [ "Zhang", "Guojun", "" ], [ "Wu", "Kaiwen", "" ], [ "Poupart", "Pascal", "" ], [ "Yu", "Yaoliang", "" ] ]
Differential games, in particular two-player sequential zero-sum games (a.k.a. minimax optimization), have been an important modeling tool in applied science and received renewed interest in machine learning due to many recent applications, such as adversarial training, generative models and reinforcement learning. However, existing theory mostly focuses on convex-concave functions with few exceptions. In this work, we propose two novel Newton-type algorithms for nonconvex-nonconcave minimax optimization. We prove their local convergence at strict local minimax points, which are surrogates of global solutions. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to strict local minimax points; (b) they are much more effective when the problem is ill-conditioned; (c) their computational complexity remains similar. We verify the effectiveness of our Newton-type algorithms through experiments on training GANs which are intrinsically nonconvex and ill-conditioned. Our code is available at https://github.com/watml/min-max-2nd-order.
2401.09064
Yanmo Hu
Yanmo Hu, Kai Wu, J. Andrew Zhang, Weibo Deng, and Y. Jay Guo
Performance Bounds and Optimization for CSI-Ratio based Bi-static Doppler Sensing in ISAC Systems
14 pages, 15 figures, journal paper
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Bi-static sensing is crucial for exploring the potential of networked sensing capabilities in integrated sensing and communications (ISAC). However, it suffers from the challenging clock asynchronism issue. CSI-ratio-based sensing is an effective means to address the issue. Its performance bounds, particularly for Doppler sensing, have not yet been fully understood. This work endeavors to fill the research gap. Focusing on a single dynamic path in high-SNR scenarios, we derive the closed-form CRB. Then, by analyzing the mutual interference between dynamic and static paths, we simplify the CRB results by deriving close approximations, further unveiling new insights into the impact of numerous physical parameters on Doppler sensing. Moreover, utilizing the new CRB and analyses, we propose novel waveform optimization strategies for noise- and interference-limited sensing scenarios, which are also empowered by closed-form and efficient solutions. Extensive simulation results are provided to validate the preciseness of the derived CRB results and analyses, with the aid of the maximum-likelihood estimator. The results also demonstrate the substantially enhanced Doppler sensing accuracy and the sensing capabilities for low-speed targets achieved by the proposed waveform design.
[ { "created": "Wed, 17 Jan 2024 08:54:19 GMT", "version": "v1" } ]
2024-01-18
[ [ "Hu", "Yanmo", "" ], [ "Wu", "Kai", "" ], [ "Zhang", "J. Andrew", "" ], [ "Deng", "Weibo", "" ], [ "Guo", "Y. Jay", "" ] ]
Bi-static sensing is crucial for exploring the potential of networked sensing capabilities in integrated sensing and communications (ISAC). However, it suffers from the challenging clock asynchronism issue. CSI ratio-based sensing is an effective means to address this issue, but its performance bounds, particularly for Doppler sensing, are not yet fully understood. This work endeavors to fill that research gap. Focusing on a single dynamic path in high-SNR scenarios, we derive the closed-form CRB. Then, by analyzing the mutual interference between the dynamic and static paths, we simplify the CRB results through close approximations, unveiling new insights into the impact of numerous physical parameters on Doppler sensing. Moreover, using the new CRB and analyses, we propose novel waveform optimization strategies for noise- and interference-limited sensing scenarios, empowered by closed-form and efficient solutions. Extensive simulation results validate the accuracy of the derived CRB results and analyses, with the aid of the maximum-likelihood estimator. The results also demonstrate the substantially enhanced Doppler sensing accuracy and the sensing capability for low-speed targets achieved by the proposed waveform design.
1905.03989
Till Menzel
Till Menzel, Gerrit Bagschik, Leon Isensee, Andre Schomburg, Markus Maurer
From Functional to Logical Scenarios: Detailing a Keyword-Based Scenario Description for Execution in a Simulation Environment
Accepted at the 2019 IEEE Intelligent Vehicles Symposium, 8 pages, 7 figures
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scenario-based development and test processes are a promising approach for verifying and validating automated driving functions. For this purpose, scenarios have to be generated during the development process in a traceable manner. In early development stages, the operating scenarios of the item to be developed are usually described in an abstract, linguistic way. Within the scope of a simulation-assisted test process, these linguistically described scenarios have to be transformed into a state space representation and converted into data formats which can be used with the respective simulation environment. Currently, this step of detailing scenarios takes considerable manual effort. Furthermore, a standardized interpretation of the linguistically described scenarios and a consistent transformation into the data formats are not guaranteed due to multiple authors as well as many constraints between the scenario parameters. In this paper, the authors present an approach to automatically detail a keyword-based scenario description for execution in a simulation environment and provide a basis for test case generation. As a first step, the keyword-based description is transformed into a parameter space representation. At the same time, constraints regarding the selection and combination of parameter values are documented for the following process steps (e.g., evolutionary or stochastic test methods). As a second step, the parameter space representation is converted into the data formats required by the simulation environment. As an example, the authors use scenarios on German freeways and convert them into the data formats OpenDRIVE (description of the road) and OpenSCENARIO (description of traffic participants and environmental conditions) for execution in the simulation environment Virtual Test Drive.
[ { "created": "Fri, 10 May 2019 07:50:03 GMT", "version": "v1" } ]
2019-05-13
[ [ "Menzel", "Till", "" ], [ "Bagschik", "Gerrit", "" ], [ "Isensee", "Leon", "" ], [ "Schomburg", "Andre", "" ], [ "Maurer", "Markus", "" ] ]
Scenario-based development and test processes are a promising approach for verifying and validating automated driving functions. For this purpose, scenarios have to be generated during the development process in a traceable manner. In early development stages, the operating scenarios of the item to be developed are usually described in an abstract, linguistic way. Within the scope of a simulation-assisted test process, these linguistically described scenarios have to be transformed into a state space representation and converted into data formats which can be used with the respective simulation environment. Currently, this step of detailing scenarios takes considerable manual effort. Furthermore, a standardized interpretation of the linguistically described scenarios and a consistent transformation into the data formats are not guaranteed due to multiple authors as well as many constraints between the scenario parameters. In this paper, the authors present an approach to automatically detail a keyword-based scenario description for execution in a simulation environment and provide a basis for test case generation. As a first step, the keyword-based description is transformed into a parameter space representation. At the same time, constraints regarding the selection and combination of parameter values are documented for the following process steps (e.g., evolutionary or stochastic test methods). As a second step, the parameter space representation is converted into the data formats required by the simulation environment. As an example, the authors use scenarios on German freeways and convert them into the data formats OpenDRIVE (description of the road) and OpenSCENARIO (description of traffic participants and environmental conditions) for execution in the simulation environment Virtual Test Drive.
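The first detailing step described in the abstract above — mapping linguistic keywords of a functional scenario onto parameter ranges of a logical scenario — can be sketched in a few lines. The keyword catalogue, parameter names, and value ranges below are purely illustrative assumptions, not the paper's actual keyword definitions:

```python
def detail_scenario(keywords):
    """Turn a keyword-based (functional) scenario into a parameter-space
    (logical) scenario by mapping each keyword to a value range.
    The catalogue is a hypothetical stand-in for a real keyword ontology."""
    catalogue = {
        "ego_speed_kmh": {"slow": (60.0, 90.0), "fast": (90.0, 130.0)},
        "lane_count": {"two_lanes": (2, 2), "three_lanes": (3, 3)},
        "road_friction": {"dry": (0.8, 1.0), "wet": (0.3, 0.6)},
    }
    # Each keyword selects a closed interval of admissible parameter values.
    return {param: catalogue[param][kw] for param, kw in keywords.items()}
```

A second step would then sample concrete values from these ranges, subject to cross-parameter constraints, and serialize them into simulator formats such as OpenDRIVE and OpenSCENARIO.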
2206.13179
Yiyang Hao
Yiyang Hao (1), Ge Li (2), Yongqiang Liu (1), Xiaowei Miao (1), He Zong (1), Siyuan Jiang (1), Yang Liu (1), He Wei (1) ((1) aiXcoder, (2) Peking University)
AixBench: A Code Generation Benchmark Dataset
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
We present a benchmark dataset for evaluating the method-level code generation task. The benchmark contains a dataset of 175 samples for automated evaluation and a dataset of 161 samples for manual evaluation. We also present a new metric for automatically evaluating the correctness of the generated code, and a set of criteria for manually evaluating the overall quality of the generated code.
[ { "created": "Mon, 27 Jun 2022 10:44:48 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2022 02:55:15 GMT", "version": "v2" } ]
2022-07-22
[ [ "Hao", "Yiyang", "" ], [ "Li", "Ge", "" ], [ "Liu", "Yongqiang", "" ], [ "Miao", "Xiaowei", "" ], [ "Zong", "He", "" ], [ "Jiang", "Siyuan", "" ], [ "Liu", "Yang", "" ], [ "Wei", "He", "" ] ]
We present a benchmark dataset for evaluating the method-level code generation task. The benchmark contains a dataset of 175 samples for automated evaluation and a dataset of 161 samples for manual evaluation. We also present a new metric for automatically evaluating the correctness of the generated code, and a set of criteria for manually evaluating the overall quality of the generated code.
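An execution-based correctness metric of the kind the abstract mentions can be illustrated with a minimal sketch. AixBench's actual metric may differ, and the `(generated_code, test_code)` sample format here is an assumption:

```python
def automated_pass_rate(samples):
    """Fraction of generated methods that pass their tests.
    samples: list of (generated_code, test_code) string pairs."""
    passed = 0
    for code, test in samples:
        env = {}
        try:
            exec(code, env)   # define the generated method
            exec(test, env)   # run assertions against it
            passed += 1
        except Exception:
            pass              # syntax error, runtime error, or failed assertion
    return passed / len(samples)
```

A production harness would additionally sandbox and time-limit each execution; this sketch only shows the scoring logic.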
2104.07166
Patrick Phillips
David Narv\'aez and Patrick Phillips
On Lev Gordeev's "On P Versus NP"
This article has 6 pages and 1 figure. This work as supported in part by NSF grant CCF-2030859 to the Computing Research Association for the CIFellows Project and by NSF grant CCF-2006496
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
In the paper "On P versus NP," Lev Gordeev attempts to extend the method of approximation, which successfully proved exponential lower bounds for monotone circuits, to the case of De Morgan Normal (DMN) circuits. As in Razborov's proof of exponential lower bounds for monotone circuits, Gordeev's work is focused on the NP-complete problem CLIQUE. If successful in proving exponential DMN circuit lower bounds for CLIQUE, Gordeev would prove that P $\neq$ NP. However, we show that Gordeev makes a crucial mistake in Lemma 12. This mistake comes from only approximating operations over positive circuit inputs. Furthermore, we argue that efforts to extend the method of approximation to DMN circuits will need to approximate negated inputs as well.
[ { "created": "Wed, 14 Apr 2021 23:54:19 GMT", "version": "v1" } ]
2021-04-16
[ [ "Narváez", "David", "" ], [ "Phillips", "Patrick", "" ] ]
In the paper "On P versus NP," Lev Gordeev attempts to extend the method of approximation, which successfully proved exponential lower bounds for monotone circuits, to the case of De Morgan Normal (DMN) circuits. As in Razborov's proof of exponential lower bounds for monotone circuits, Gordeev's work is focused on the NP-complete problem CLIQUE. If successful in proving exponential DMN circuit lower bounds for CLIQUE, Gordeev would prove that P $\neq$ NP. However, we show that Gordeev makes a crucial mistake in Lemma 12. This mistake comes from only approximating operations over positive circuit inputs. Furthermore, we argue that efforts to extend the method of approximation to DMN circuits will need to approximate negated inputs as well.
2301.11432
Albrecht Kurze
Albrecht Kurze
Emotional Interaction Qualities: Vocabulary, Modalities, Actions, And Mapping
In Workshop The Future of Emotion in Human-Computer Interaction at Conference on Human Factors in Computing Systems (CHI22). April 13-14, 2022. 4 pages
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Have you ever typed particularly forcefully on your keyboard, maybe even harshly, to write and send a message with some emphasis on your emotional state? Did it work? Probably not. It made no difference how you typed or how you interacted with your mouse. But what if you had other, connected devices with other modalities for input and output? Which would you have chosen, and how would you characterize your interactions with them? Using our multisensory and multimodal tool, the Loaded Dice, we explored in co-design workshops the design space of IoT usage scenarios: which interaction qualities users want, characterized using an interaction vocabulary, and how they might map them to a selection of sensors and actuators. Based on our experience, we discuss some thoughts on such a mapping.
[ { "created": "Thu, 26 Jan 2023 21:37:54 GMT", "version": "v1" } ]
2023-01-30
[ [ "Kurze", "Albrecht", "" ] ]
Have you ever typed particularly forcefully on your keyboard, maybe even harshly, to write and send a message with some emphasis on your emotional state? Did it work? Probably not. It made no difference how you typed or how you interacted with your mouse. But what if you had other, connected devices with other modalities for input and output? Which would you have chosen, and how would you characterize your interactions with them? Using our multisensory and multimodal tool, the Loaded Dice, we explored in co-design workshops the design space of IoT usage scenarios: which interaction qualities users want, characterized using an interaction vocabulary, and how they might map them to a selection of sensors and actuators. Based on our experience, we discuss some thoughts on such a mapping.
2401.12071
Corentin Ferry
Corentin Ferry and Nicolas Derumigny and Steven Derrien and Sanjay Rajopadhye
An Irredundant and Compressed Data Layout to Optimize Bandwidth Utilization of FPGA Accelerators
11 pages, 11 figures, 2 tables
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Memory bandwidth is known to be a performance bottleneck for FPGA accelerators, especially when they deal with large multi-dimensional data-sets. A large body of work focuses on reducing off-chip transfers, but few authors try to improve the efficiency of the transfers themselves. This paper addresses the latter issue by proposing (i) a compiler-based approach to the accelerator's data layout that maximizes contiguous access to off-chip memory, and (ii) data packing and runtime compression techniques that take advantage of this layout to further improve memory performance. We show that our approach can decrease the I/O cycles up to $7\times$ compared to un-optimized memory accesses.
[ { "created": "Mon, 22 Jan 2024 16:11:11 GMT", "version": "v1" } ]
2024-01-23
[ [ "Ferry", "Corentin", "" ], [ "Derumigny", "Nicolas", "" ], [ "Derrien", "Steven", "" ], [ "Rajopadhye", "Sanjay", "" ] ]
Memory bandwidth is known to be a performance bottleneck for FPGA accelerators, especially when they deal with large multi-dimensional data-sets. A large body of work focuses on reducing off-chip transfers, but few authors try to improve the efficiency of the transfers themselves. This paper addresses the latter issue by proposing (i) a compiler-based approach to the accelerator's data layout that maximizes contiguous access to off-chip memory, and (ii) data packing and runtime compression techniques that take advantage of this layout to further improve memory performance. We show that our approach can decrease the I/O cycles up to $7\times$ compared to un-optimized memory accesses.
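The layout idea in the abstract above — reorganizing data so each access pattern touches a contiguous memory region — has a simple software analogue. The sketch below is an illustrative tile-packing transform, not the paper's compiler algorithm:

```python
def pack_tiles(matrix, tile):
    """Repack a row-major 2D array so each tile-by-tile block is stored
    contiguously; reading one block then needs a single contiguous burst
    instead of `tile` strided reads."""
    n, m = len(matrix), len(matrix[0])
    packed = []
    for ti in range(0, n, tile):          # iterate over tile rows
        for tj in range(0, m, tile):      # iterate over tile columns
            for i in range(ti, min(ti + tile, n)):
                packed.extend(matrix[i][tj:min(tj + tile, m)])
    return packed
```

For a 4x4 matrix with 2x2 tiles, each tile's four elements end up adjacent in the packed buffer, which is the property a DMA engine needs for long bursts.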
2308.12305
Haokun Chen
Haokun Chen, Yao Zhang, Denis Krompass, Jindong Gu, Volker Tresp
FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning
null
null
null
null
cs.LG cs.AI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, foundation models have exhibited remarkable advancements in multi-modal learning. These models, equipped with millions (or billions) of parameters, typically require a substantial amount of data for finetuning. However, collecting and centralizing training data from diverse sectors becomes challenging due to distinct privacy regulations. Federated Learning (FL) emerges as a promising solution, enabling multiple clients to collaboratively train neural networks without centralizing their local data. To alleviate client computation burdens and communication overheads, previous works have adapted Parameter-efficient Finetuning (PEFT) methods for FL, whereby only a small fraction of the model parameters are optimized and communicated during federated communications. Nevertheless, most previous works have focused on a single modality and neglected one common phenomenon, i.e., the presence of data heterogeneity across the clients. Therefore, in this work, we propose a finetuning framework tailored to heterogeneous multi-modal FL, called Federated Dual-Adapter Teacher (FedDAT). Specifically, our approach leverages a Dual-Adapter Teacher (DAT) to address data heterogeneity by regularizing the client local updates and applying Mutual Knowledge Distillation (MKD) for an efficient knowledge transfer. FedDAT is the first approach that enables an efficient distributed finetuning of foundation models for a variety of heterogeneous Vision-Language tasks. To demonstrate its effectiveness, we conduct extensive experiments on four multi-modality FL benchmarks with different types of data heterogeneity, where FedDAT substantially outperforms the existing centralized PEFT methods adapted for FL.
[ { "created": "Mon, 21 Aug 2023 21:57:01 GMT", "version": "v1" } ]
2023-08-25
[ [ "Chen", "Haokun", "" ], [ "Zhang", "Yao", "" ], [ "Krompass", "Denis", "" ], [ "Gu", "Jindong", "" ], [ "Tresp", "Volker", "" ] ]
Recently, foundation models have exhibited remarkable advancements in multi-modal learning. These models, equipped with millions (or billions) of parameters, typically require a substantial amount of data for finetuning. However, collecting and centralizing training data from diverse sectors becomes challenging due to distinct privacy regulations. Federated Learning (FL) emerges as a promising solution, enabling multiple clients to collaboratively train neural networks without centralizing their local data. To alleviate client computation burdens and communication overheads, previous works have adapted Parameter-efficient Finetuning (PEFT) methods for FL, whereby only a small fraction of the model parameters are optimized and communicated during federated communications. Nevertheless, most previous works have focused on a single modality and neglected one common phenomenon, i.e., the presence of data heterogeneity across the clients. Therefore, in this work, we propose a finetuning framework tailored to heterogeneous multi-modal FL, called Federated Dual-Adapter Teacher (FedDAT). Specifically, our approach leverages a Dual-Adapter Teacher (DAT) to address data heterogeneity by regularizing the client local updates and applying Mutual Knowledge Distillation (MKD) for an efficient knowledge transfer. FedDAT is the first approach that enables an efficient distributed finetuning of foundation models for a variety of heterogeneous Vision-Language tasks. To demonstrate its effectiveness, we conduct extensive experiments on four multi-modality FL benchmarks with different types of data heterogeneity, where FedDAT substantially outperforms the existing centralized PEFT methods adapted for FL.
2106.11929
Zhiqiang Gong
Zhiqiang Gong and Weien Zhou and Jun Zhang and Wei Peng and Wen Yao
Joint Deep Reversible Regression Model and Physics-Informed Unsupervised Learning for Temperature Field Reconstruction
Accepted by Engineering Applications of Artificial Intelligence
Engineering Applications of Artificial Intelligence, 2023
10.1016/j.engappai.2022.105686
null
cs.LG cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Temperature monitoring during the lifetime of heat source components in engineering systems is essential to guarantee their normal operation and working life. However, prior methods, which mainly use interpolation-based estimation to reconstruct the temperature field from limited monitoring points, require large amounts of temperature tensors for an accurate estimation. This may decrease the availability and reliability of the system and sharply increase the monitoring cost. To solve this problem, this work develops a novel physics-informed deep reversible regression model for temperature field reconstruction of heat-source systems (TFR-HSS), which can better reconstruct the temperature field from limited monitoring points in an unsupervised manner. First, we define the TFR-HSS task mathematically, model it numerically, and hence recast the task as an image-to-image regression problem. Then this work develops the deep reversible regression model, which can better learn the physical information, especially over the boundary. Finally, considering the physical characteristics of heat conduction as well as the boundary conditions, this work proposes a physics-informed reconstruction loss comprising four training losses and jointly learns the deep surrogate model with these losses in an unsupervised manner. Experimental studies have been conducted over typical two-dimensional heat-source systems to demonstrate the effectiveness of the proposed method.
[ { "created": "Tue, 22 Jun 2021 17:01:53 GMT", "version": "v1" }, { "created": "Thu, 24 Jun 2021 03:25:01 GMT", "version": "v2" }, { "created": "Mon, 5 Jul 2021 02:58:16 GMT", "version": "v3" }, { "created": "Thu, 5 May 2022 01:28:24 GMT", "version": "v4" }, { "created": "Tue, 29 Nov 2022 06:57:30 GMT", "version": "v5" } ]
2022-12-27
[ [ "Gong", "Zhiqiang", "" ], [ "Zhou", "Weien", "" ], [ "Zhang", "Jun", "" ], [ "Peng", "Wei", "" ], [ "Yao", "Wen", "" ] ]
Temperature monitoring during the lifetime of heat source components in engineering systems is essential to guarantee their normal operation and working life. However, prior methods, which mainly use interpolation-based estimation to reconstruct the temperature field from limited monitoring points, require large amounts of temperature tensors for an accurate estimation. This may decrease the availability and reliability of the system and sharply increase the monitoring cost. To solve this problem, this work develops a novel physics-informed deep reversible regression model for temperature field reconstruction of heat-source systems (TFR-HSS), which can better reconstruct the temperature field from limited monitoring points in an unsupervised manner. First, we define the TFR-HSS task mathematically, model it numerically, and hence recast the task as an image-to-image regression problem. Then this work develops the deep reversible regression model, which can better learn the physical information, especially over the boundary. Finally, considering the physical characteristics of heat conduction as well as the boundary conditions, this work proposes a physics-informed reconstruction loss comprising four training losses and jointly learns the deep surrogate model with these losses in an unsupervised manner. Experimental studies have been conducted over typical two-dimensional heat-source systems to demonstrate the effectiveness of the proposed method.
2204.11062
Jie Yan
Jie Yan, Xin Liu, Ji Qi, Tao You and Zhong-Yuan Zhang
Selective clustering ensemble based on kappa and F-score
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering ensembles have shown impressive performance in improving the accuracy and robustness of partition results and have received much attention in recent years. Selective clustering ensemble (SCE) can further improve the ensemble performance by selecting base partitions or clusters according to diversity and stability. However, there is a conflict between diversity and stability, and making the trade-off between the two is challenging. The key is how to evaluate the quality of the base partitions and clusters. In this paper, we propose a new evaluation method for partitions and clusters using kappa and F-score, leading to a new SCE method that uses kappa to select informative base partitions and F-score to weight clusters based on stability. The effectiveness and efficiency of the proposed method are empirically validated on real datasets.
[ { "created": "Sat, 23 Apr 2022 12:34:32 GMT", "version": "v1" } ]
2022-04-26
[ [ "Yan", "Jie", "" ], [ "Liu", "Xin", "" ], [ "Qi", "Ji", "" ], [ "You", "Tao", "" ], [ "Zhang", "Zhong-Yuan", "" ] ]
Clustering ensembles have shown impressive performance in improving the accuracy and robustness of partition results and have received much attention in recent years. Selective clustering ensemble (SCE) can further improve the ensemble performance by selecting base partitions or clusters according to diversity and stability. However, there is a conflict between diversity and stability, and making the trade-off between the two is challenging. The key is how to evaluate the quality of the base partitions and clusters. In this paper, we propose a new evaluation method for partitions and clusters using kappa and F-score, leading to a new SCE method that uses kappa to select informative base partitions and F-score to weight clusters based on stability. The effectiveness and efficiency of the proposed method are empirically validated on real datasets.
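One label-invariant way to score two base partitions with kappa and F-score, as the abstract above proposes, is to compare their pairwise co-cluster decisions. This is an illustrative sketch; the paper's exact definitions of the two measures may differ:

```python
from itertools import combinations

def colabels(labels):
    """1 if a pair of points shares a cluster, else 0, for every point pair.
    Invariant to how the clusters themselves are numbered."""
    return [int(labels[i] == labels[j])
            for i, j in combinations(range(len(labels)), 2)]

def cohen_kappa(a, b):
    """Chance-corrected agreement between two binary decision vectors."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                  # agreement by chance
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

def f_score(ref, pred):
    """Harmonic mean of precision and recall on co-cluster decisions."""
    tp = sum(r and p for r, p in zip(ref, pred))
    fp = sum(p and not r for r, p in zip(ref, pred))
    fn = sum(r and not p for r, p in zip(ref, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Because co-cluster pairs are invariant to label permutation, a relabeled copy of the same partition scores perfectly under both measures.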
2307.07710
Yufei Wang
Yufei Wang, Yi Yu, Wenhan Yang, Lanqing Guo, Lap-Pui Chau, Alex C. Kot, Bihan Wen
ExposureDiffusion: Learning to Expose for Low-light Image Enhancement
accepted by ICCV2023
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous raw image-based low-light image enhancement methods predominantly relied on feed-forward neural networks to learn deterministic mappings from low-light to normally-exposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model. Different from a vanilla diffusion model that has to perform Gaussian denoising, with the injected physics-based exposure model, our restoration process can directly start from a noisy image instead of pure noise. As such, our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out side-effects in the iterative refinement when the intermediate results are already well-exposed. The proposed framework is compatible with real-paired datasets, real/synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements using different exposure models and backbones. Besides, the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feed-forward neural model when few parameters are adopted.
[ { "created": "Sat, 15 Jul 2023 04:48:35 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 08:23:21 GMT", "version": "v2" } ]
2023-08-16
[ [ "Wang", "Yufei", "" ], [ "Yu", "Yi", "" ], [ "Yang", "Wenhan", "" ], [ "Guo", "Lanqing", "" ], [ "Chau", "Lap-Pui", "" ], [ "Kot", "Alex C.", "" ], [ "Wen", "Bihan", "" ] ]
Previous raw image-based low-light image enhancement methods predominantly relied on feed-forward neural networks to learn deterministic mappings from low-light to normally-exposed images. However, they failed to capture critical distribution information, leading to visually undesirable results. This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model. Different from a vanilla diffusion model that has to perform Gaussian denoising, with the injected physics-based exposure model, our restoration process can directly start from a noisy image instead of pure noise. As such, our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models. To make full use of the advantages of different intermediate steps, we further propose an adaptive residual layer that effectively screens out side-effects in the iterative refinement when the intermediate results are already well-exposed. The proposed framework is compatible with real-paired datasets, real/synthetic noise models, and different backbone networks. We evaluate the proposed method on various public benchmarks, achieving promising results with consistent improvements using different exposure models and backbones. Besides, the proposed method achieves better generalization capacity for unseen amplifying ratios and better performance than a larger feed-forward neural model when few parameters are adopted.
2206.02658
Adam Aviv
David G. Balash and Mir Masood Ali and Xiaoyuan Wu and Chris Kanich and Adam J. Aviv
Longitudinal Analysis of Privacy Labels in the Apple App Store
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
In December of 2020, Apple started to require app developers to self-report privacy label annotations on their apps indicating what data is collected and how it is used. To understand the adoption of and shifts in privacy labels in the App Store, we collected nearly weekly snapshots of over 1.6 million apps for over a year (July 15, 2021 -- October 25, 2022), capturing the dynamics of the privacy label ecosystem. Nearly two years after privacy labels launched, only 70.1% of apps have privacy labels, but we observed an increase of 28% during the measurement period. Privacy label adoption rates are mostly driven by new apps rather than older apps coming into compliance. Of apps with labels, 18.1% collect data used to track users, 38.1% collect data that is linked to a user identity, and 42.0% collect data that is not linked. A surprisingly large share (41.8%) of apps with labels indicate that they do not collect any data, and while we do not perform direct analysis of the apps to verify this claim, we observe that many of these apps likely choose a Does Not Collect label because they are forced to select a label, rather than this being the true behavior of the app. Moreover, nearly all apps that were assigned labels during the measurement period do not change their labels, and when they do, the new labels indicate more data collection, not less. This suggests that privacy labels may be a ``set once'' mechanism for developers that may not actually provide users with the clarity needed to make informed privacy decisions.
[ { "created": "Mon, 6 Jun 2022 14:51:44 GMT", "version": "v1" }, { "created": "Wed, 29 Mar 2023 20:22:17 GMT", "version": "v2" } ]
2023-03-31
[ [ "Balash", "David G.", "" ], [ "Ali", "Mir Masood", "" ], [ "Wu", "Xiaoyuan", "" ], [ "Kanich", "Chris", "" ], [ "Aviv", "Adam J.", "" ] ]
In December of 2020, Apple started to require app developers to self-report privacy label annotations on their apps indicating what data is collected and how it is used. To understand the adoption of and shifts in privacy labels in the App Store, we collected nearly weekly snapshots of over 1.6 million apps for over a year (July 15, 2021 -- October 25, 2022), capturing the dynamics of the privacy label ecosystem. Nearly two years after privacy labels launched, only 70.1% of apps have privacy labels, but we observed an increase of 28% during the measurement period. Privacy label adoption rates are mostly driven by new apps rather than older apps coming into compliance. Of apps with labels, 18.1% collect data used to track users, 38.1% collect data that is linked to a user identity, and 42.0% collect data that is not linked. A surprisingly large share (41.8%) of apps with labels indicate that they do not collect any data, and while we do not perform direct analysis of the apps to verify this claim, we observe that many of these apps likely choose a Does Not Collect label because they are forced to select a label, rather than this being the true behavior of the app. Moreover, nearly all apps that were assigned labels during the measurement period do not change their labels, and when they do, the new labels indicate more data collection, not less. This suggests that privacy labels may be a ``set once'' mechanism for developers that may not actually provide users with the clarity needed to make informed privacy decisions.
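The longitudinal bookkeeping described above — attributing label-adoption growth to new apps versus older apps coming into compliance — reduces to a snapshot diff. A minimal sketch, with a hypothetical snapshot format (app id mapped to a has-label flag):

```python
def adoption_breakdown(first, last):
    """Compare the oldest and newest snapshots of the label ecosystem.
    Each snapshot maps app_id -> bool (app declares a privacy label)."""
    rate = sum(last.values()) / len(last)       # overall adoption rate now
    new_apps = set(last) - set(first)           # apps absent from 1st snapshot
    surviving = set(first) & set(last)          # apps present in both
    from_new = sum(last[a] for a in new_apps)   # labels from new apps
    from_compliance = sum(last[a] and not first[a] for a in surviving)
    return rate, from_new, from_compliance
```

Comparing `from_new` against `from_compliance` over successive snapshot pairs is one way to support the paper's claim that adoption is driven mostly by new apps.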
2406.16001
Hu Gao
Hu Gao and Depeng Dang
Learning Accurate and Enriched Features for Stereo Image Super-Resolution
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stereo image super-resolution (stereoSR) aims to enhance the quality of super-resolution results by incorporating complementary information from an alternative view. Although current methods have shown significant advancements, they typically operate on representations at full resolution to preserve spatial details, facing challenges in accurately capturing contextual information. Simultaneously, they utilize all feature similarities to cross-fuse information from the two views, potentially disregarding the impact of irrelevant information. To overcome this problem, we propose a mixed-scale selective fusion network (MSSFNet) that preserves precise spatial details, incorporates abundant contextual information, and adaptively selects and fuses the most accurate features from the two views to promote high-quality stereoSR. Specifically, we develop a mixed-scale block (MSB) that obtains contextually enriched feature representations across multiple spatial scales while preserving precise spatial details. Furthermore, to dynamically retain the most essential cross-view information, we design a selective fusion attention module (SFAM) that searches for and transfers the most accurate features from the other view. To learn an enriched set of local and non-local features, we introduce a fast Fourier convolution block (FFCB) to explicitly integrate frequency-domain knowledge. Extensive experiments show that MSSFNet achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations.
[ { "created": "Sun, 23 Jun 2024 03:34:17 GMT", "version": "v1" } ]
2024-06-25
[ [ "Gao", "Hu", "" ], [ "Dang", "Depeng", "" ] ]
Stereo image super-resolution (stereoSR) aims to enhance the quality of super-resolution results by incorporating complementary information from an alternative view. Although current methods have shown significant advancements, they typically operate on representations at full resolution to preserve spatial details, and thus face challenges in accurately capturing contextual information. At the same time, they utilize all feature similarities to cross-fuse information from the two views, potentially disregarding the impact of irrelevant information. To overcome these problems, we propose a mixed-scale selective fusion network (MSSFNet) that preserves precise spatial details, incorporates abundant contextual information, and adaptively selects and fuses the most accurate features from the two views to promote high-quality stereoSR. Specifically, we develop a mixed-scale block (MSB) that obtains contextually enriched feature representations across multiple spatial scales while preserving precise spatial details. Furthermore, to dynamically retain the most essential cross-view information, we design a selective fusion attention module (SFAM) that searches for and transfers the most accurate features from the other view. To learn an enriched set of local and non-local features, we introduce a fast Fourier convolution block (FFCB) to explicitly integrate frequency-domain knowledge. Extensive experiments show that MSSFNet achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations.
1804.00907
Harshan Jagadeesh
J. Harshan and Yih-Chun Hu
Cognitive Radio from Hell: Flipping Attack on Direct-Sequence Spread Spectrum
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a strong adversarial attack, referred to as the flipping attack, on Direct-Sequence Spread Spectrum (DSSS) systems. In this attack, the attacker, which is appropriately positioned between the transmitter and the receiver, instantaneously flips the transmitted symbols in the air at a 50% rate, thereby driving the channel capacity to zero. Unlike the traditional jamming attack, this attack, when perfectly executed, cannot be detected at the receiver using signal-to-noise-ratio measurements. However, the attack requires the attacker to perfectly know the realizations of all the channels in the model. We first introduce the consequences of the flipping attack on narrowband frequency-flat channels, and subsequently discuss its feasibility in wideband frequency-selective channels. From the legitimate users' perspective, we present a method to detect this attack and propose heuristics to improve error performance under the attack. We emphasize that future cyber-physical systems that employ DSSS should design transceivers to detect the proposed flipping attack and then apply appropriate countermeasures.
[ { "created": "Tue, 3 Apr 2018 10:49:03 GMT", "version": "v1" } ]
2018-04-04
[ [ "Harshan", "J.", "" ], [ "Hu", "Yih-Chun", "" ] ]
In this paper, we introduce a strong adversarial attack, referred to as the flipping attack, on Direct-Sequence Spread Spectrum (DSSS) systems. In this attack, the attacker, which is appropriately positioned between the transmitter and the receiver, instantaneously flips the transmitted symbols in the air at a 50% rate, thereby driving the channel capacity to zero. Unlike the traditional jamming attack, this attack, when perfectly executed, cannot be detected at the receiver using signal-to-noise-ratio measurements. However, the attack requires the attacker to perfectly know the realizations of all the channels in the model. We first introduce the consequences of the flipping attack on narrowband frequency-flat channels, and subsequently discuss its feasibility in wideband frequency-selective channels. From the legitimate users' perspective, we present a method to detect this attack and propose heuristics to improve error performance under the attack. We emphasize that future cyber-physical systems that employ DSSS should design transceivers to detect the proposed flipping attack and then apply appropriate countermeasures.
1705.01454
Yu-Sung Tu
Yu-Sung Tu, Wei-Torng Juang
The Payoff Region of a Strategic Game and Its Extreme Points
null
null
null
null
cs.GT q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The range of the payoff function of an $n$-player finite strategic game is investigated using a novel approach: the notion of extreme points of a non-convex set. The shape of a noncooperative payoff region can be estimated using the extreme points and supporting hyperplanes of the cooperative payoff region. A basic structural characteristic of a noncooperative payoff region is that any of its subregions must be non-strictly convex if the subregion contains a relative neighborhood of a point on its boundary. Moreover, applying the properties of extreme points of a noncooperative payoff region is a simple and effective way to prove results about Pareto efficiency and social efficiency in game theory.
[ { "created": "Wed, 3 May 2017 14:42:41 GMT", "version": "v1" }, { "created": "Tue, 9 May 2017 09:40:10 GMT", "version": "v2" }, { "created": "Mon, 6 Aug 2018 07:00:30 GMT", "version": "v3" } ]
2018-08-07
[ [ "Tu", "Yu-Sung", "" ], [ "Juang", "Wei-Torng", "" ] ]
The range of the payoff function of an $n$-player finite strategic game is investigated using a novel approach: the notion of extreme points of a non-convex set. The shape of a noncooperative payoff region can be estimated using the extreme points and supporting hyperplanes of the cooperative payoff region. A basic structural characteristic of a noncooperative payoff region is that any of its subregions must be non-strictly convex if the subregion contains a relative neighborhood of a point on its boundary. Moreover, applying the properties of extreme points of a noncooperative payoff region is a simple and effective way to prove results about Pareto efficiency and social efficiency in game theory.
2404.06860
Fulong Ma
Fulong Ma, Weiqing Qi, Guoyang Zhao, Linwei Zheng, Sheng Wang, Yuxuan Liu and Ming Liu
Monocular 3D lane detection for Autonomous Driving: Recent Achievements, Challenges, and Outlooks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D lane detection is essential in autonomous driving as it extracts structural and traffic information from the road in three-dimensional space, aiding self-driving cars in logical, safe, and comfortable path planning and motion control. Given the cost of sensors and the advantages of visual data in color information, 3D lane detection based on monocular vision is an important research direction in autonomous driving, increasingly gaining attention in both industry and academia. Regrettably, recent advancements in visual perception seem inadequate for the development of fully reliable 3D lane detection algorithms, which also hampers the progress of vision-based fully autonomous vehicles. We believe that there is still considerable room for improvement in 3D lane detection algorithms for autonomous vehicles using visual sensors. This review analyzes the current state of the field: it covers all current monocular-based 3D lane detection processes, discusses the performance of these cutting-edge algorithms, analyzes the time complexity of various algorithms, and highlights the main achievements and limitations of ongoing research efforts. The survey also includes a comprehensive discussion of available 3D lane detection datasets and the challenges that researchers face but have not yet resolved. Finally, our work outlines future research directions and invites researchers and practitioners to join this exciting field.
[ { "created": "Wed, 10 Apr 2024 09:35:50 GMT", "version": "v1" }, { "created": "Fri, 19 Apr 2024 13:18:46 GMT", "version": "v2" } ]
2024-04-22
[ [ "Ma", "Fulong", "" ], [ "Qi", "Weiqing", "" ], [ "Zhao", "Guoyang", "" ], [ "Zheng", "Linwei", "" ], [ "Wang", "Sheng", "" ], [ "Liu", "Yuxuan", "" ], [ "Liu", "Ming", "" ] ]
3D lane detection is essential in autonomous driving as it extracts structural and traffic information from the road in three-dimensional space, aiding self-driving cars in logical, safe, and comfortable path planning and motion control. Given the cost of sensors and the advantages of visual data in color information, 3D lane detection based on monocular vision is an important research direction in autonomous driving, increasingly gaining attention in both industry and academia. Regrettably, recent advancements in visual perception seem inadequate for the development of fully reliable 3D lane detection algorithms, which also hampers the progress of vision-based fully autonomous vehicles. We believe that there is still considerable room for improvement in 3D lane detection algorithms for autonomous vehicles using visual sensors. This review analyzes the current state of the field: it covers all current monocular-based 3D lane detection processes, discusses the performance of these cutting-edge algorithms, analyzes the time complexity of various algorithms, and highlights the main achievements and limitations of ongoing research efforts. The survey also includes a comprehensive discussion of available 3D lane detection datasets and the challenges that researchers face but have not yet resolved. Finally, our work outlines future research directions and invites researchers and practitioners to join this exciting field.
2307.11879
Jorge L\'opez
Jorge L\'opez, Charalampos Chatzinakis, Marc Cartigny and Claude Poletti
Software defined networking flow admission and routing under minimal security constraints
8 pages, 10 figures, as submitted to TRUSTCOM23
null
null
null
cs.NI cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, computer networks and telecommunications in general have been shifting paradigms to adopt software-centric approaches. Software Defined Networking (SDN) is one such paradigm: it centralizes control, and intelligent applications can be defined on top of this architecture, enabling the network's behavior to be defined by means of software. In this work, we propose an approach for Flow Admission and Routing under Minimal Security Constraints (FARSec) in Software Defined Networks, where network flows must use links that are at least as secure as their required security level. We prove that FARSec can find feasible paths that respect the minimum security level of each flow. If this is not possible, FARSec rejects the flow so as not to compromise its security. We show that the computational complexity of the proposed approach is polynomial. Experimental results with semi-randomly generated graphs confirm the efficiency and correctness of the proposed approach. Finally, we implement the proposed solution using OpenFlow and ONOS, an open-source SDN controller, and validate its functionality on an emulated network with various security levels.
[ { "created": "Fri, 21 Jul 2023 19:36:05 GMT", "version": "v1" } ]
2023-07-25
[ [ "López", "Jorge", "" ], [ "Chatzinakis", "Charalampos", "" ], [ "Cartigny", "Marc", "" ], [ "Poletti", "Claude", "" ] ]
In recent years, computer networks and telecommunications in general have been shifting paradigms to adopt software-centric approaches. Software Defined Networking (SDN) is one such paradigm: it centralizes control, and intelligent applications can be defined on top of this architecture, enabling the network's behavior to be defined by means of software. In this work, we propose an approach for Flow Admission and Routing under Minimal Security Constraints (FARSec) in Software Defined Networks, where network flows must use links that are at least as secure as their required security level. We prove that FARSec can find feasible paths that respect the minimum security level of each flow. If this is not possible, FARSec rejects the flow so as not to compromise its security. We show that the computational complexity of the proposed approach is polynomial. Experimental results with semi-randomly generated graphs confirm the efficiency and correctness of the proposed approach. Finally, we implement the proposed solution using OpenFlow and ONOS, an open-source SDN controller, and validate its functionality on an emulated network with various security levels.
2404.03518
Sichen Chen
Sichen Chen, Yingyi Zhang, Siming Huang, Ran Yi, Ke Fan, Ruixin Zhang, Peixian Chen, Jun Wang, Shouhong Ding, Lizhuang Ma
SDPose: Tokenized Pose Estimation via Circulation-Guide Self-Distillation
Accepted by CVPR 2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, transformer-based methods have achieved state-of-the-art prediction quality on human pose estimation (HPE). Nonetheless, most of these top-performing transformer-based models are too computation- and storage-demanding to deploy on edge computing platforms. Transformer-based models that require fewer resources are prone to under-fitting due to their smaller scale and thus perform notably worse than their larger counterparts. Given this conundrum, we introduce SDPose, a new self-distillation method for improving the performance of small transformer-based models. To mitigate the under-fitting problem, we design a transformer module named Multi-Cycled Transformer (MCT), based on multiple cycled forward passes, to more fully exploit the potential of small model parameters. Further, to avoid the additional inference cost brought by MCT, we introduce a self-distillation scheme that extracts the knowledge from the MCT module into a naive forward model. Specifically, on the MSCOCO validation dataset, SDPose-T obtains 69.7% mAP with 4.4M parameters and 1.8 GFLOPs. Furthermore, SDPose-S-V2 obtains 73.5% mAP on the MSCOCO validation dataset with 6.2M parameters and 4.7 GFLOPs, achieving a new state of the art among predominant tiny neural network methods. Our code is available at https://github.com/MartyrPenink/SDPose.
[ { "created": "Thu, 4 Apr 2024 15:23:14 GMT", "version": "v1" } ]
2024-04-05
[ [ "Chen", "Sichen", "" ], [ "Zhang", "Yingyi", "" ], [ "Huang", "Siming", "" ], [ "Yi", "Ran", "" ], [ "Fan", "Ke", "" ], [ "Zhang", "Ruixin", "" ], [ "Chen", "Peixian", "" ], [ "Wang", "Jun", "" ], [ "Ding", "Shouhong", "" ], [ "Ma", "Lizhuang", "" ] ]
Recently, transformer-based methods have achieved state-of-the-art prediction quality on human pose estimation (HPE). Nonetheless, most of these top-performing transformer-based models are too computation- and storage-demanding to deploy on edge computing platforms. Transformer-based models that require fewer resources are prone to under-fitting due to their smaller scale and thus perform notably worse than their larger counterparts. Given this conundrum, we introduce SDPose, a new self-distillation method for improving the performance of small transformer-based models. To mitigate the under-fitting problem, we design a transformer module named Multi-Cycled Transformer (MCT), based on multiple cycled forward passes, to more fully exploit the potential of small model parameters. Further, to avoid the additional inference cost brought by MCT, we introduce a self-distillation scheme that extracts the knowledge from the MCT module into a naive forward model. Specifically, on the MSCOCO validation dataset, SDPose-T obtains 69.7% mAP with 4.4M parameters and 1.8 GFLOPs. Furthermore, SDPose-S-V2 obtains 73.5% mAP on the MSCOCO validation dataset with 6.2M parameters and 4.7 GFLOPs, achieving a new state of the art among predominant tiny neural network methods. Our code is available at https://github.com/MartyrPenink/SDPose.
2103.02761
Mayee F. Chen
Mayee F. Chen, Benjamin Cohen-Wang, Stephen Mussmann, Frederic Sala, Christopher R\'e
Comparing the Value of Labeled and Unlabeled Data in Method-of-Moments Latent Variable Estimation
To appear in AISTATS 2021
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Labeling data for modern machine learning is expensive and time-consuming. Latent variable models can be used to infer labels from weaker, easier-to-acquire sources operating on unlabeled data. Such models can also be trained using labeled data, presenting a key question: should a user invest in few labeled or many unlabeled points? We answer this via a framework centered on model misspecification in method-of-moments latent variable estimation. Our core result is a bias-variance decomposition of the generalization error, which shows that the unlabeled-only approach incurs additional bias under misspecification. We then introduce a correction that provably removes this bias in certain cases. We apply our decomposition framework to three scenarios -- well-specified, misspecified, and corrected models -- to 1) choose between labeled and unlabeled data and 2) learn from their combination. We observe theoretically and with synthetic experiments that for well-specified models, labeled points are worth a constant factor more than unlabeled points. With misspecification, however, their relative value is higher due to the additional bias but can be reduced with correction. We also apply our approach to study real-world weak supervision techniques for dataset construction.
[ { "created": "Wed, 3 Mar 2021 23:52:38 GMT", "version": "v1" } ]
2021-03-05
[ [ "Chen", "Mayee F.", "" ], [ "Cohen-Wang", "Benjamin", "" ], [ "Mussmann", "Stephen", "" ], [ "Sala", "Frederic", "" ], [ "Ré", "Christopher", "" ] ]
Labeling data for modern machine learning is expensive and time-consuming. Latent variable models can be used to infer labels from weaker, easier-to-acquire sources operating on unlabeled data. Such models can also be trained using labeled data, presenting a key question: should a user invest in few labeled or many unlabeled points? We answer this via a framework centered on model misspecification in method-of-moments latent variable estimation. Our core result is a bias-variance decomposition of the generalization error, which shows that the unlabeled-only approach incurs additional bias under misspecification. We then introduce a correction that provably removes this bias in certain cases. We apply our decomposition framework to three scenarios -- well-specified, misspecified, and corrected models -- to 1) choose between labeled and unlabeled data and 2) learn from their combination. We observe theoretically and with synthetic experiments that for well-specified models, labeled points are worth a constant factor more than unlabeled points. With misspecification, however, their relative value is higher due to the additional bias but can be reduced with correction. We also apply our approach to study real-world weak supervision techniques for dataset construction.
2010.12730
Gustavo Aguilar
Gustavo Aguilar, Bryan McCann, Tong Niu, Nazneen Rajani, Nitish Keskar, Thamar Solorio
Char2Subword: Extending the Subword Embedding Space Using Robust Character Compositionality
Findings of EMNLP 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Byte-pair encoding (BPE) is a ubiquitous algorithm in the subword tokenization process of language models, as it provides multiple benefits. However, this process is based solely on pre-training data statistics, making it hard for the tokenizer to handle infrequent spellings. On the other hand, though robust to misspellings, pure character-level models often lead to unreasonably long sequences and make it harder for the model to learn meaningful words. To alleviate these challenges, we propose a character-based subword module (char2subword) that learns the subword embedding table in pre-trained models like BERT. Our char2subword module builds representations from characters out of the subword vocabulary, and it can be used as a drop-in replacement for the subword embedding table. The module is robust to character-level alterations such as misspellings, word inflection, casing, and punctuation. We further integrate it with BERT through pre-training while keeping the BERT transformer parameters fixed, thus providing a practical method. Finally, we show that incorporating our module into mBERT significantly improves performance on the social media linguistic code-switching evaluation (LinCE) benchmark.
[ { "created": "Sat, 24 Oct 2020 01:08:28 GMT", "version": "v1" }, { "created": "Sun, 4 Apr 2021 17:17:23 GMT", "version": "v2" }, { "created": "Fri, 24 Sep 2021 02:09:51 GMT", "version": "v3" } ]
2021-09-27
[ [ "Aguilar", "Gustavo", "" ], [ "McCann", "Bryan", "" ], [ "Niu", "Tong", "" ], [ "Rajani", "Nazneen", "" ], [ "Keskar", "Nitish", "" ], [ "Solorio", "Thamar", "" ] ]
Byte-pair encoding (BPE) is a ubiquitous algorithm in the subword tokenization process of language models, as it provides multiple benefits. However, this process is based solely on pre-training data statistics, making it hard for the tokenizer to handle infrequent spellings. On the other hand, though robust to misspellings, pure character-level models often lead to unreasonably long sequences and make it harder for the model to learn meaningful words. To alleviate these challenges, we propose a character-based subword module (char2subword) that learns the subword embedding table in pre-trained models like BERT. Our char2subword module builds representations from characters out of the subword vocabulary, and it can be used as a drop-in replacement for the subword embedding table. The module is robust to character-level alterations such as misspellings, word inflection, casing, and punctuation. We further integrate it with BERT through pre-training while keeping the BERT transformer parameters fixed, thus providing a practical method. Finally, we show that incorporating our module into mBERT significantly improves performance on the social media linguistic code-switching evaluation (LinCE) benchmark.
2308.01404
Aidan O'Gara
Aidan O'Gara
Hoodwinked: Deception and Cooperation in a Text-Based Game for Language Models
Added reference for McKenzie 2023; updated acknowledgements
null
null
null
cs.CL cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
Are current language models capable of deception and lie detection? We study this question by introducing a text-based game called $\textit{Hoodwinked}$, inspired by Mafia and Among Us. Players are locked in a house and must find a key to escape, but one player is tasked with killing the others. Each time a murder is committed, the surviving players have a natural language discussion and then vote to banish one player from the game. We conduct experiments with agents controlled by GPT-3, GPT-3.5, and GPT-4 and find evidence of deception and lie detection capabilities. The killer often denies their crime and accuses others, leading to measurable effects on voting outcomes. More advanced models are more effective killers, outperforming smaller models in 18 of 24 pairwise comparisons. Secondary metrics provide evidence that this improvement is not mediated by different actions, but rather by stronger persuasive skills during discussions. To evaluate the ability of AI agents to deceive humans, we make this game publicly available at https://hoodwinked.ai/.
[ { "created": "Wed, 5 Jul 2023 17:22:09 GMT", "version": "v1" }, { "created": "Fri, 4 Aug 2023 00:57:06 GMT", "version": "v2" } ]
2023-08-07
[ [ "O'Gara", "Aidan", "" ] ]
Are current language models capable of deception and lie detection? We study this question by introducing a text-based game called $\textit{Hoodwinked}$, inspired by Mafia and Among Us. Players are locked in a house and must find a key to escape, but one player is tasked with killing the others. Each time a murder is committed, the surviving players have a natural language discussion and then vote to banish one player from the game. We conduct experiments with agents controlled by GPT-3, GPT-3.5, and GPT-4 and find evidence of deception and lie detection capabilities. The killer often denies their crime and accuses others, leading to measurable effects on voting outcomes. More advanced models are more effective killers, outperforming smaller models in 18 of 24 pairwise comparisons. Secondary metrics provide evidence that this improvement is not mediated by different actions, but rather by stronger persuasive skills during discussions. To evaluate the ability of AI agents to deceive humans, we make this game publicly available at https://hoodwinked.ai/.
2007.09998
Pranay Pasula
Pranay Pasula
Lagrangian Duality in Reinforcement Learning
8 pages, 0 figures; fixed typo in abstract
null
null
null
cs.LG cs.AI math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although duality is used extensively in certain fields, such as supervised learning in machine learning, it has been much less explored in others, such as reinforcement learning (RL). In this paper, we show how duality is involved in a variety of RL work, from that which spearheaded the field, such as Richard Bellman's value iteration, to that which was done within just the past few years yet has already had significant impact, such as TRPO, A3C, and GAIL. We show that duality is not uncommon in reinforcement learning, especially when value iteration, or dynamic programming, is used or when first or second order approximations are made to transform initially intractable problems into tractable convex programs.
[ { "created": "Mon, 20 Jul 2020 10:55:12 GMT", "version": "v1" }, { "created": "Tue, 21 Jul 2020 01:01:50 GMT", "version": "v2" }, { "created": "Sat, 25 Jul 2020 01:17:10 GMT", "version": "v3" } ]
2020-07-28
[ [ "Pasula", "Pranay", "" ] ]
Although duality is used extensively in certain fields, such as supervised learning in machine learning, it has been much less explored in others, such as reinforcement learning (RL). In this paper, we show how duality is involved in a variety of RL work, from that which spearheaded the field, such as Richard Bellman's value iteration, to that which was done within just the past few years yet has already had significant impact, such as TRPO, A3C, and GAIL. We show that duality is not uncommon in reinforcement learning, especially when value iteration, or dynamic programming, is used or when first or second order approximations are made to transform initially intractable problems into tractable convex programs.
2310.09394
Jihong Park
Jinhyuk Choi, Jihong Park, Seung-Woo Ko, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim
Semantics Alignment via Split Learning for Resilient Multi-User Semantic Communication
5 pages, 4 figures, 1 table, submitted to the IEEE for possible publication
null
null
null
cs.LG cs.AI cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies on semantic communication commonly rely on neural network (NN) based transceivers such as deep joint source and channel coding (DeepJSCC). Unlike traditional transceivers, these neural transceivers are trainable using actual source data and channels, enabling them to extract and communicate semantics. On the flip side, each neural transceiver is inherently biased towards specific source data and channels, making it difficult for different transceivers to understand each other's intended semantics, particularly upon their initial encounter. To align semantics across multiple neural transceivers, we propose a distributed-learning-based solution that leverages split learning (SL) and partial NN fine-tuning techniques. In this method, referred to as SL with layer freezing (SLF), each encoder downloads a misaligned decoder and locally fine-tunes a fraction of these encoder-decoder NN layers. By adjusting this fraction, SLF controls computing and communication costs. Simulation results confirm the effectiveness of SLF in aligning semantics under different source data and channel dissimilarities, in terms of classification accuracy, reconstruction errors, and recovery time for comprehending intended semantics after misalignment.
[ { "created": "Fri, 13 Oct 2023 20:29:55 GMT", "version": "v1" } ]
2023-10-17
[ [ "Choi", "Jinhyuk", "" ], [ "Park", "Jihong", "" ], [ "Ko", "Seung-Woo", "" ], [ "Choi", "Jinho", "" ], [ "Bennis", "Mehdi", "" ], [ "Kim", "Seong-Lyun", "" ] ]
Recent studies on semantic communication commonly rely on neural network (NN) based transceivers such as deep joint source and channel coding (DeepJSCC). Unlike traditional transceivers, these neural transceivers are trainable using actual source data and channels, enabling them to extract and communicate semantics. On the flip side, each neural transceiver is inherently biased towards specific source data and channels, making it difficult for different transceivers to understand each other's intended semantics, particularly upon their initial encounter. To align semantics across multiple neural transceivers, we propose a distributed-learning-based solution that leverages split learning (SL) and partial NN fine-tuning techniques. In this method, referred to as SL with layer freezing (SLF), each encoder downloads a misaligned decoder and locally fine-tunes a fraction of these encoder-decoder NN layers. By adjusting this fraction, SLF controls computing and communication costs. Simulation results confirm the effectiveness of SLF in aligning semantics under different source data and channel dissimilarities, in terms of classification accuracy, reconstruction errors, and recovery time for comprehending intended semantics after misalignment.
1702.08017
Borja Balle
Borja Balle, Pascale Gourdeau, Prakash Panangaden
Bisimulation Metrics for Weighted Automata
null
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a new bisimulation (pseudo)metric for weighted finite automata (WFA) that generalizes Boreale's linear bisimulation relation. Our metrics are induced by seminorms on the state space of WFA. Our development is based on spectral properties of sets of linear operators. In particular, the joint spectral radius of the transition matrices of WFA plays a central role. We also study continuity properties of the bisimulation pseudometric, establish an undecidability result for computing the metric, and give a preliminary account of applications to spectral learning of weighted automata.
[ { "created": "Sun, 26 Feb 2017 10:31:28 GMT", "version": "v1" }, { "created": "Sun, 14 May 2017 07:53:06 GMT", "version": "v2" } ]
2017-05-16
[ [ "Balle", "Borja", "" ], [ "Gourdeau", "Pascale", "" ], [ "Panangaden", "Prakash", "" ] ]
We develop a new bisimulation (pseudo)metric for weighted finite automata (WFA) that generalizes Boreale's linear bisimulation relation. Our metrics are induced by seminorms on the state space of WFA. Our development is based on spectral properties of sets of linear operators. In particular, the joint spectral radius of the transition matrices of WFA plays a central role. We also study continuity properties of the bisimulation pseudometric, establish an undecidability result for computing the metric, and give a preliminary account of applications to spectral learning of weighted automata.
2402.02648
Jinwoo Ahn
Jinwoo Ahn, Kyuseung Shin
Recursive Chain-of-Feedback Prevents Performance Degradation from Redundant Prompting
Still Ongoing Work; 8 Pages; 2 Figures
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) frequently struggle with complex reasoning tasks, failing to construct logically sound steps towards the solution. In response to this behavior, users often prompt the LLMs repeatedly in hopes of reaching a better response. This paper studies such repetitive behavior and its effect by defining a novel setting, Chain-of-Feedback (CoF). The setting takes questions that require multi-step reasoning as input. Upon each response, we repeatedly prompt meaningless feedback (e.g., 'make another attempt') requesting additional trials. Surprisingly, our preliminary results show that repeated meaningless feedback gradually decreases the quality of the responses, eventually leading to a larger deviation from the intended outcome. To alleviate these issues, we propose a novel method, Recursive Chain-of-Feedback (R-CoF). Following the logic of recursion in computer science, R-CoF recursively revises the initially incorrect response by breaking down each incorrect reasoning step into smaller individual problems. Our preliminary results show that the majority of questions that LLMs fail to answer correctly can be answered using R-CoF without any sample data outlining the logical process.
[ { "created": "Mon, 5 Feb 2024 00:44:28 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2024 10:46:01 GMT", "version": "v2" } ]
2024-03-04
[ [ "Ahn", "Jinwoo", "" ], [ "Shin", "Kyuseung", "" ] ]
Large Language Models (LLMs) frequently struggle with complex reasoning tasks, failing to construct logically sound steps towards the solution. In response to this behavior, users often try prompting the LLMs repeatedly in hopes of reaching a better response. This paper studies such repetitive behavior and its effect by defining a novel setting, Chain-of-Feedback (CoF). The setting takes questions that require multi-step reasoning as an input. Upon response, we repetitively prompt meaningless feedback (e.g. 'make another attempt') requesting additional trials. Surprisingly, our preliminary results show that repeated meaningless feedback gradually decreases the quality of the responses, eventually leading to a larger deviation from the intended outcome. To alleviate these troubles, we propose a novel method, Recursive Chain-of-Feedback (R-CoF). Following the logic of recursion in computer science, R-CoF recursively revises the initially incorrect response by breaking down each incorrect reasoning step into smaller individual problems. Our preliminary results show that the majority of questions that LLMs fail to respond to correctly can be answered using R-CoF without any sample data outlining the logical process.
2309.13496
Liz Izhikevich
Jack Cable, Drew Gregory, Liz Izhikevich, Zakir Durumeric
Stratosphere: Finding Vulnerable Cloud Storage Buckets
Proceedings of the 24th International Symposium on Research in Attacks, Intrusions and Defenses. 2021
null
10.1145/3471621.3473500
null
cs.CR cs.NI
http://creativecommons.org/licenses/by/4.0/
Misconfigured cloud storage buckets have leaked hundreds of millions of medical, voter, and customer records. These breaches are due to a combination of easily-guessable bucket names and error-prone security configurations, which, together, allow attackers to easily guess and access sensitive data. In this work, we investigate the security of buckets, finding that prior studies have largely underestimated cloud insecurity by focusing on simple, easy-to-guess names. By leveraging prior work in the password analysis space, we introduce Stratosphere, a system that learns how buckets are named in practice in order to efficiently guess the names of vulnerable buckets. Using Stratosphere, we find widespread exploitation of buckets and vulnerable configurations continuing to increase over the years. We conclude with recommendations for operators, researchers, and cloud providers.
[ { "created": "Sat, 23 Sep 2023 23:27:19 GMT", "version": "v1" } ]
2023-09-26
[ [ "Cable", "Jack", "" ], [ "Gregory", "Drew", "" ], [ "Izhikevich", "Liz", "" ], [ "Durumeric", "Zakir", "" ] ]
Misconfigured cloud storage buckets have leaked hundreds of millions of medical, voter, and customer records. These breaches are due to a combination of easily-guessable bucket names and error-prone security configurations, which, together, allow attackers to easily guess and access sensitive data. In this work, we investigate the security of buckets, finding that prior studies have largely underestimated cloud insecurity by focusing on simple, easy-to-guess names. By leveraging prior work in the password analysis space, we introduce Stratosphere, a system that learns how buckets are named in practice in order to efficiently guess the names of vulnerable buckets. Using Stratosphere, we find widespread exploitation of buckets and vulnerable configurations continuing to increase over the years. We conclude with recommendations for operators, researchers, and cloud providers.
1902.00545
Philipp Seifer
Philipp Seifer (University of Koblenz-Landau, Germany), Martin Leinberger (University of Koblenz-Landau, Germany), Ralf L\"ammel (University of Koblenz-Landau, Germany), Steffen Staab (University of Koblenz-Landau and University of Southampton, Germany)
Semantic Query Integration With Reason
null
The Art, Science, and Engineering of Programming, 2019, Vol. 3, Issue 3, Article 13
10.22152/programming-journal.org/2019/3/13
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph-based data models allow for flexible data representation. In particular, semantic data based on RDF and OWL fuels use cases ranging from general knowledge graphs to domain specific knowledge in various technological or scientific domains. The flexibility of such approaches, however, makes programming with semantic data tedious and error-prone. In particular the logics-based data descriptions employed by OWL are problematic for existing error-detecting techniques, such as type systems. In this paper, we present DOTSpa, an advanced integration of semantic data into programming. We embed description logics, the logical foundations of OWL, into the type checking process of a statically typed programming language and provide typed data access through an embedding of the query language SPARQL. In addition, we demonstrate a concrete implementation of the approach, by extending the Scala programming language. We qualitatively compare programs using our approach to equivalent programs using a state-of-the-art library, in terms of how both frameworks aid users in the handling of typical failure scenarios.
[ { "created": "Fri, 1 Feb 2019 20:16:13 GMT", "version": "v1" }, { "created": "Tue, 5 Feb 2019 13:55:11 GMT", "version": "v2" } ]
2019-02-06
[ [ "Seifer", "Philipp", "", "University of Koblenz-Landau, Germany" ], [ "Leinberger", "Martin", "", "University of Koblenz-Landau, Germany" ], [ "Lämmel", "Ralf", "", "University\n of Koblenz-Landau, Germany" ], [ "Staab", "Steffen", "", "University of Koblenz-Landau and\n University of Southampton, Germany" ] ]
Graph-based data models allow for flexible data representation. In particular, semantic data based on RDF and OWL fuels use cases ranging from general knowledge graphs to domain specific knowledge in various technological or scientific domains. The flexibility of such approaches, however, makes programming with semantic data tedious and error-prone. In particular the logics-based data descriptions employed by OWL are problematic for existing error-detecting techniques, such as type systems. In this paper, we present DOTSpa, an advanced integration of semantic data into programming. We embed description logics, the logical foundations of OWL, into the type checking process of a statically typed programming language and provide typed data access through an embedding of the query language SPARQL. In addition, we demonstrate a concrete implementation of the approach, by extending the Scala programming language. We qualitatively compare programs using our approach to equivalent programs using a state-of-the-art library, in terms of how both frameworks aid users in the handling of typical failure scenarios.
1411.0028
Ivan Soprunov
Ivan Soprunov
Lattice polytopes in coding theory
11 pages, 3 figures
J. Algebra Comb. Discrete Appl., 2(2) pp.85-94 (2015)
10.13069/jacodesmath.75353
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also prove a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.
[ { "created": "Fri, 31 Oct 2014 21:21:05 GMT", "version": "v1" } ]
2015-06-26
[ [ "Soprunov", "Ivan", "" ] ]
In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also prove a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.
0804.1302
Francis Bach
Francis Bach (INRIA Rocquencourt)
Bolasso: model consistent Lasso estimation through the bootstrap
null
null
null
null
cs.LG math.ST stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning repository.
[ { "created": "Tue, 8 Apr 2008 15:40:03 GMT", "version": "v1" } ]
2008-12-18
[ [ "Bach", "Francis", "", "INRIA Rocquencourt" ] ]
We consider the least-square linear regression problem with regularization by the l1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning repository.
1706.05048
Ali Borji
Ali Borji and Aysegul Dundar
Human-like Clustering with Deep Convolutional Neural Networks
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classification and clustering have been studied separately in machine learning and computer vision. Inspired by the recent success of deep learning models in solving various vision problems (e.g., object recognition, semantic segmentation) and the fact that humans serve as the gold standard in assessing clustering algorithms, here, we advocate for a unified treatment of the two problems and suggest that hierarchical frameworks that progressively build complex patterns on top of the simpler ones (e.g., convolutional neural networks) offer a promising solution. We do not dwell much on the learning mechanisms in these frameworks as they are still a matter of debate, with respect to biological constraints. Instead, we emphasize the compositionality of the real world structures and objects. In particular, we show that CNNs, trained end to end using back propagation with noisy labels, are able to cluster data points belonging to several overlapping shapes, and do so much better than the state of the art algorithms. The main takeaway lesson from our study is that mechanisms of human vision, particularly the hierarchical organization of the visual ventral stream, should be taken into account in clustering algorithms (e.g., for learning representations in an unsupervised manner or with minimum supervision) to reach human level clustering performance. This, by no means, suggests that other methods do not hold merit. For example, methods relying on pairwise affinities (e.g., spectral clustering) have been very successful in many scenarios but still fail in some cases (e.g., overlapping clusters).
[ { "created": "Thu, 15 Jun 2017 19:10:50 GMT", "version": "v1" }, { "created": "Mon, 11 Dec 2017 23:45:26 GMT", "version": "v2" } ]
2017-12-13
[ [ "Borji", "Ali", "" ], [ "Dundar", "Aysegul", "" ] ]
Classification and clustering have been studied separately in machine learning and computer vision. Inspired by the recent success of deep learning models in solving various vision problems (e.g., object recognition, semantic segmentation) and the fact that humans serve as the gold standard in assessing clustering algorithms, here, we advocate for a unified treatment of the two problems and suggest that hierarchical frameworks that progressively build complex patterns on top of the simpler ones (e.g., convolutional neural networks) offer a promising solution. We do not dwell much on the learning mechanisms in these frameworks as they are still a matter of debate, with respect to biological constraints. Instead, we emphasize the compositionality of the real world structures and objects. In particular, we show that CNNs, trained end to end using back propagation with noisy labels, are able to cluster data points belonging to several overlapping shapes, and do so much better than the state of the art algorithms. The main takeaway lesson from our study is that mechanisms of human vision, particularly the hierarchical organization of the visual ventral stream, should be taken into account in clustering algorithms (e.g., for learning representations in an unsupervised manner or with minimum supervision) to reach human level clustering performance. This, by no means, suggests that other methods do not hold merit. For example, methods relying on pairwise affinities (e.g., spectral clustering) have been very successful in many scenarios but still fail in some cases (e.g., overlapping clusters).
2408.01571
Matan Atad
Matan Atad, David Schinz, Hendrik Moeller, Robert Graf, Benedikt Wiestler, Daniel Rueckert, Nassir Navab, Jan S. Kirschke, Matthias Keicher
Counterfactual Explanations for Medical Image Classification and Regression using Diffusion Autoencoder
In submission. arXiv admin note: text overlap with arXiv:2303.12031
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Counterfactual explanations (CEs) aim to enhance the interpretability of machine learning models by illustrating how alterations in input features would affect the resulting predictions. Common CE approaches require an additional model and are typically constrained to binary counterfactuals. In contrast, we propose a novel method that operates directly on the latent space of a generative model, specifically a Diffusion Autoencoder (DAE). This approach offers inherent interpretability by enabling the generation of CEs and the continuous visualization of the model's internal representation across decision boundaries. Our method leverages the DAE's ability to encode images into a semantically rich latent space in an unsupervised manner, eliminating the need for labeled data or separate feature extraction models. We show that these latent representations are helpful for medical condition classification and the ordinal regression of severity pathologies, such as vertebral compression fractures (VCF) and diabetic retinopathy (DR). Beyond binary CEs, our method supports the visualization of ordinal CEs using a linear model, providing deeper insights into the model's decision-making process and enhancing interpretability. Experiments across various medical imaging datasets demonstrate the method's advantages in interpretability and versatility. The linear manifold of the DAE's latent space allows for meaningful interpolation and manipulation, making it a powerful tool for exploring medical image properties. Our code is available at https://github.com/matanat/dae_counterfactual.
[ { "created": "Fri, 2 Aug 2024 21:01:30 GMT", "version": "v1" } ]
2024-08-06
[ [ "Atad", "Matan", "" ], [ "Schinz", "David", "" ], [ "Moeller", "Hendrik", "" ], [ "Graf", "Robert", "" ], [ "Wiestler", "Benedikt", "" ], [ "Rueckert", "Daniel", "" ], [ "Navab", "Nassir", "" ], [ "Kirschke", "Jan S.", "" ], [ "Keicher", "Matthias", "" ] ]
Counterfactual explanations (CEs) aim to enhance the interpretability of machine learning models by illustrating how alterations in input features would affect the resulting predictions. Common CE approaches require an additional model and are typically constrained to binary counterfactuals. In contrast, we propose a novel method that operates directly on the latent space of a generative model, specifically a Diffusion Autoencoder (DAE). This approach offers inherent interpretability by enabling the generation of CEs and the continuous visualization of the model's internal representation across decision boundaries. Our method leverages the DAE's ability to encode images into a semantically rich latent space in an unsupervised manner, eliminating the need for labeled data or separate feature extraction models. We show that these latent representations are helpful for medical condition classification and the ordinal regression of severity pathologies, such as vertebral compression fractures (VCF) and diabetic retinopathy (DR). Beyond binary CEs, our method supports the visualization of ordinal CEs using a linear model, providing deeper insights into the model's decision-making process and enhancing interpretability. Experiments across various medical imaging datasets demonstrate the method's advantages in interpretability and versatility. The linear manifold of the DAE's latent space allows for meaningful interpolation and manipulation, making it a powerful tool for exploring medical image properties. Our code is available at https://github.com/matanat/dae_counterfactual.
1708.07888
Wei Chen
Wei Chen, Mark Fuge
Active Expansion Sampling for Learning Feasible Domains in an Unbounded Input Space
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car body styling designs based on constraints like aesthetics and functionality. Current active-learning based methods learn feasible domains for bounded input spaces. However, we usually lack prior knowledge about how to set those input variable bounds. Bounds that are too small will fail to cover all feasible domains; while bounds that are too large will waste query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains. We show that AES has a misclassification loss guarantee within the explored region, independent of the number of iterations or labeled samples. Thus it can be used for real-time prediction of samples' feasibility within the explored region. We evaluate AES on three test examples and compare AES with two adaptive sampling methods -- the Neighborhood-Voronoi algorithm and the straddle heuristic -- that operate over fixed input variable bounds.
[ { "created": "Fri, 25 Aug 2017 21:12:40 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2017 22:19:23 GMT", "version": "v2" }, { "created": "Sat, 20 Jan 2018 19:29:56 GMT", "version": "v3" } ]
2018-01-23
[ [ "Chen", "Wei", "" ], [ "Fuge", "Mark", "" ] ]
Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car body styling designs based on constraints like aesthetics and functionality. Current active-learning based methods learn feasible domains for bounded input spaces. However, we usually lack prior knowledge about how to set those input variable bounds. Bounds that are too small will fail to cover all feasible domains; while bounds that are too large will waste query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains. We show that AES has a misclassification loss guarantee within the explored region, independent of the number of iterations or labeled samples. Thus it can be used for real-time prediction of samples' feasibility within the explored region. We evaluate AES on three test examples and compare AES with two adaptive sampling methods -- the Neighborhood-Voronoi algorithm and the straddle heuristic -- that operate over fixed input variable bounds.
1911.01015
David Schubert
David Schubert, Nikolaus Demmel, Lukas von Stumberg, Vladyslav Usenko and Daniel Cremers
Rolling-Shutter Modelling for Direct Visual-Inertial Odometry
null
null
10.1109/IROS40897.2019.8968539
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a direct visual-inertial odometry (VIO) method which estimates the motion of the sensor setup and sparse 3D geometry of the environment based on measurements from a rolling-shutter camera and an inertial measurement unit (IMU). The visual part of the system performs a photometric bundle adjustment on a sparse set of points. This direct approach does not extract feature points and is able to track not only corners, but any pixels with sufficient gradient magnitude. Neglecting rolling-shutter effects in the visual part severely degrades accuracy and robustness of the system. In this paper, we incorporate a rolling-shutter model into the photometric bundle adjustment that estimates a set of recent keyframe poses and the inverse depth of a sparse set of points. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between selected keyframes. For every keyframe we estimate not only the pose but also velocity and biases to correct the IMU measurements. Unlike systems with global-shutter cameras, we use both IMU measurements and rolling-shutter effects of the camera to estimate velocity and biases for every state. Last, we evaluate our system on a novel dataset that contains global-shutter and rolling-shutter images, IMU data and ground-truth poses for ten different sequences, which we make publicly available. Evaluation shows that the proposed method outperforms a system where rolling shutter is not modelled and achieves similar accuracy to the global-shutter method on global-shutter data.
[ { "created": "Mon, 4 Nov 2019 02:54:15 GMT", "version": "v1" } ]
2020-06-18
[ [ "Schubert", "David", "" ], [ "Demmel", "Nikolaus", "" ], [ "von Stumberg", "Lukas", "" ], [ "Usenko", "Vladyslav", "" ], [ "Cremers", "Daniel", "" ] ]
We present a direct visual-inertial odometry (VIO) method which estimates the motion of the sensor setup and sparse 3D geometry of the environment based on measurements from a rolling-shutter camera and an inertial measurement unit (IMU). The visual part of the system performs a photometric bundle adjustment on a sparse set of points. This direct approach does not extract feature points and is able to track not only corners, but any pixels with sufficient gradient magnitude. Neglecting rolling-shutter effects in the visual part severely degrades accuracy and robustness of the system. In this paper, we incorporate a rolling-shutter model into the photometric bundle adjustment that estimates a set of recent keyframe poses and the inverse depth of a sparse set of points. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between selected keyframes. For every keyframe we estimate not only the pose but also velocity and biases to correct the IMU measurements. Unlike systems with global-shutter cameras, we use both IMU measurements and rolling-shutter effects of the camera to estimate velocity and biases for every state. Last, we evaluate our system on a novel dataset that contains global-shutter and rolling-shutter images, IMU data and ground-truth poses for ten different sequences, which we make publicly available. Evaluation shows that the proposed method outperforms a system where rolling shutter is not modelled and achieves similar accuracy to the global-shutter method on global-shutter data.
2309.12864
Lorenzo Carletti
Lorenzo Carletti, Gianluca Brilli, Alessandro Capotondi, Paolo Valente, Andrea Marongiu
The Importance of Worst-Case Memory Contention Analysis for Heterogeneous SoCs
Accepted for presentation at the CPS workshop 2023 (http://www.cpsschool.eu/cps-workshop)
null
null
null
cs.PF
http://creativecommons.org/licenses/by/4.0/
Memory interference may heavily inflate task execution times in Heterogeneous Systems-on-Chips (HeSoCs). Knowing worst-case interference is consequently fundamental for supporting the correct execution of time-sensitive applications. In most of the literature, worst-case interference is assumed to be generated by, and therefore is estimated through, read-intensive synthetic workloads with no caching. Yet these workloads do not always generate worst-case interference. This is the consequence of the general results reported in this work. By testing on multiple architectures, we determined that the highest interference generation traffic pattern is actually hardware dependent, and that making assumptions could lead to a severe underestimation of the worst-case (in our case, of more than 9x).
[ { "created": "Fri, 22 Sep 2023 13:38:25 GMT", "version": "v1" } ]
2023-09-25
[ [ "Carletti", "Lorenzo", "" ], [ "Brilli", "Gianluca", "" ], [ "Capotondi", "Alessandro", "" ], [ "Valente", "Paolo", "" ], [ "Marongiu", "Andrea", "" ] ]
Memory interference may heavily inflate task execution times in Heterogeneous Systems-on-Chips (HeSoCs). Knowing worst-case interference is consequently fundamental for supporting the correct execution of time-sensitive applications. In most of the literature, worst-case interference is assumed to be generated by, and therefore is estimated through, read-intensive synthetic workloads with no caching. Yet these workloads do not always generate worst-case interference. This is the consequence of the general results reported in this work. By testing on multiple architectures, we determined that the highest interference generation traffic pattern is actually hardware dependent, and that making assumptions could lead to a severe underestimation of the worst-case (in our case, of more than 9x).
1403.6968
Milos Nikolic
Milos Nikolic, Mohammed ElSeidy, Christoph Koch
LINVIEW: Incremental View Maintenance for Complex Analytical Queries
14 pages, SIGMOD
null
10.1145/2588555.2610519
null
cs.DB cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many analytics tasks and machine learning problems can be naturally expressed by iterative linear algebra programs. In this paper, we study the incremental view maintenance problem for such complex analytical queries. We develop a framework, called LINVIEW, for capturing deltas of linear algebra programs and understanding their computational cost. Linear algebra operations tend to cause an avalanche effect where even very local changes to the input matrices spread out and infect all of the intermediate results and the final view, causing incremental view maintenance to lose its performance benefit over re-evaluation. We develop techniques based on matrix factorizations to contain such epidemics of change. As a consequence, our techniques make incremental view maintenance of linear algebra practical and usually substantially cheaper than re-evaluation. We show, both analytically and experimentally, the usefulness of these techniques when applied to standard analytics tasks. Our evaluation demonstrates the efficiency of LINVIEW in generating parallel incremental programs that outperform re-evaluation techniques by more than an order of magnitude.
[ { "created": "Thu, 27 Mar 2014 10:22:32 GMT", "version": "v1" }, { "created": "Fri, 9 May 2014 10:54:26 GMT", "version": "v2" } ]
2014-05-12
[ [ "Nikolic", "Milos", "" ], [ "ElSeidy", "Mohammed", "" ], [ "Koch", "Christoph", "" ] ]
Many analytics tasks and machine learning problems can be naturally expressed by iterative linear algebra programs. In this paper, we study the incremental view maintenance problem for such complex analytical queries. We develop a framework, called LINVIEW, for capturing deltas of linear algebra programs and understanding their computational cost. Linear algebra operations tend to cause an avalanche effect where even very local changes to the input matrices spread out and infect all of the intermediate results and the final view, causing incremental view maintenance to lose its performance benefit over re-evaluation. We develop techniques based on matrix factorizations to contain such epidemics of change. As a consequence, our techniques make incremental view maintenance of linear algebra practical and usually substantially cheaper than re-evaluation. We show, both analytically and experimentally, the usefulness of these techniques when applied to standard analytics tasks. Our evaluation demonstrates the efficiency of LINVIEW in generating parallel incremental programs that outperform re-evaluation techniques by more than an order of magnitude.
0807.0644
Neal E. Young
Christos Koufogiannakis and Neal E. Young
Greedy D-Approximation Algorithm for Covering with Arbitrary Constraints and Submodular Cost
null
Algorithmica 66(1):113-152 (2013)
10.1007/978-3-642-02927-1_53
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a simple greedy D-approximation algorithm for any covering problem whose objective function is submodular and non-decreasing, and whose feasible region can be expressed as the intersection of arbitrary (closed upwards) covering constraints, each of which constrains at most D variables of the problem. (A simple example is Vertex Cover, with D = 2.) The algorithm generalizes previous approximation algorithms for fundamental covering problems and online paging and caching problems.
[ { "created": "Fri, 4 Jul 2008 23:31:29 GMT", "version": "v1" }, { "created": "Thu, 20 Nov 2008 01:26:09 GMT", "version": "v2" }, { "created": "Fri, 8 May 2009 02:11:42 GMT", "version": "v3" }, { "created": "Fri, 30 Dec 2011 17:40:35 GMT", "version": "v4" } ]
2015-06-02
[ [ "Koufogiannakis", "Christos", "" ], [ "Young", "Neal E.", "" ] ]
This paper describes a simple greedy D-approximation algorithm for any covering problem whose objective function is submodular and non-decreasing, and whose feasible region can be expressed as the intersection of arbitrary (closed upwards) covering constraints, each of which constrains at most D variables of the problem. (A simple example is Vertex Cover, with D = 2.) The algorithm generalizes previous approximation algorithms for fundamental covering problems and online paging and caching problems.
1611.07593
Ziming Zhang
Ziming Zhang and Venkatesh Saligrama
Learning Joint Feature Adaptation for Zero-Shot Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zero-shot recognition (ZSR) aims to recognize target-domain data instances of unseen classes based on the models learned from associated pairs of seen-class source and target domain data. One of the key challenges in ZSR is the relative scarcity of source-domain features (e.g. one feature vector per class), which do not fully account for wide variability in target-domain instances. In this paper we propose a novel framework of learning data-dependent feature transforms for scoring similarity between an arbitrary pair of source and target data instances to account for the wide variability in the target domain. Our proposed approach is based on optimizing over a parameterized family of local feature displacements that maximize the source-target adaptive similarity functions. Accordingly we propose formulating zero-shot learning (ZSL) using latent structural SVMs to learn our similarity functions from training data. As a demonstration we design a specific algorithm under the proposed framework involving bilinear similarity functions and regularized least squares as penalties for feature displacement. We test our approach on several benchmark datasets for ZSR and show significant improvement over the state-of-the-art. For instance, on the aP&Y dataset we can achieve 80.89% in terms of recognition accuracy, outperforming the state-of-the-art by 11.15%.
[ { "created": "Wed, 23 Nov 2016 01:13:37 GMT", "version": "v1" }, { "created": "Sat, 3 Dec 2016 03:17:02 GMT", "version": "v2" } ]
2016-12-06
[ [ "Zhang", "Ziming", "" ], [ "Saligrama", "Venkatesh", "" ] ]
Zero-shot recognition (ZSR) aims to recognize target-domain data instances of unseen classes based on models learned from associated pairs of seen-class source- and target-domain data. One of the key challenges in ZSR is the relative scarcity of source-domain features (e.g. one feature vector per class), which do not fully account for the wide variability in target-domain instances. In this paper we propose a novel framework for learning data-dependent feature transforms that score the similarity between an arbitrary pair of source and target data instances, accounting for the wide variability in the target domain. Our proposed approach is based on optimizing over a parameterized family of local feature displacements that maximize the source-target adaptive similarity functions. Accordingly, we propose formulating zero-shot learning (ZSL) using latent structural SVMs to learn our similarity functions from training data. As a demonstration, we design a specific algorithm under the proposed framework involving bilinear similarity functions and regularized least squares as penalties for feature displacement. We test our approach on several benchmark datasets for ZSR and show significant improvement over the state-of-the-art. For instance, on the aP&Y dataset we achieve 80.89% recognition accuracy, outperforming the state-of-the-art by 11.15%.
1412.8615
Sumedh Tirodkar
Ashish Chiplunkar, Sumedh Tirodkar, Sundar Vishwanathan
On Randomized Algorithms for Matching in the Online Preemptive Model
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the power of randomized algorithms for the maximum cardinality matching (MCM) and the maximum weight matching (MWM) problems in the online preemptive model. In this model, the edges of a graph are revealed one by one and the algorithm is required to always maintain a valid matching. On seeing an edge, the algorithm has to either accept or reject the edge. If accepted, then the adjacent edges are discarded. The complexity of the problem is settled for deterministic algorithms. Almost nothing is known for randomized algorithms. A lower bound of $1.693$ is known for MCM with a trivial upper bound of $2$. An upper bound of $5.356$ is known for MWM. We initiate a systematic study of the same in this paper with the aim of isolating and understanding the difficulty. We begin with a primal-dual analysis of the deterministic algorithm due to McGregor. All deterministic lower bounds are on instances which are trees at every step. For this class of (unweighted) graphs we present a randomized algorithm which is $\frac{28}{15}$-competitive. The analysis is a considerable extension of the (simple) primal-dual analysis for the deterministic case. The key new technique is that the distribution of primal charge to dual variables depends on the "neighborhood" and needs to be done after having seen the entire input. The assignment is asymmetric, in that edges may assign different charges to the two end-points. The proof also depends on a non-trivial structural statement on the performance of the algorithm on the input tree. The other main result of this paper is an extension of the deterministic lower bound of Varadaraja to a natural class of randomized algorithms which decide whether to accept a new edge or not using independent random choices.
[ { "created": "Tue, 30 Dec 2014 12:21:06 GMT", "version": "v1" }, { "created": "Thu, 2 Jul 2015 12:06:58 GMT", "version": "v2" } ]
2015-07-03
[ [ "Chiplunkar", "Ashish", "" ], [ "Tirodkar", "Sumedh", "" ], [ "Vishwanathan", "Sundar", "" ] ]
We investigate the power of randomized algorithms for the maximum cardinality matching (MCM) and the maximum weight matching (MWM) problems in the online preemptive model. In this model, the edges of a graph are revealed one by one and the algorithm is required to always maintain a valid matching. On seeing an edge, the algorithm has to either accept or reject the edge. If accepted, then the adjacent edges are discarded. The complexity of the problem is settled for deterministic algorithms. Almost nothing is known for randomized algorithms. A lower bound of $1.693$ is known for MCM with a trivial upper bound of $2$. An upper bound of $5.356$ is known for MWM. We initiate a systematic study of the same in this paper with the aim of isolating and understanding the difficulty. We begin with a primal-dual analysis of the deterministic algorithm due to McGregor. All deterministic lower bounds are on instances which are trees at every step. For this class of (unweighted) graphs we present a randomized algorithm which is $\frac{28}{15}$-competitive. The analysis is a considerable extension of the (simple) primal-dual analysis for the deterministic case. The key new technique is that the distribution of primal charge to dual variables depends on the "neighborhood" and needs to be done after having seen the entire input. The assignment is asymmetric, in that edges may assign different charges to the two end-points. The proof also depends on a non-trivial structural statement on the performance of the algorithm on the input tree. The other main result of this paper is an extension of the deterministic lower bound of Varadaraja to a natural class of randomized algorithms which decide whether to accept a new edge or not using independent random choices.
1710.09026
Markus Kliegl
Markus Kliegl, Siddharth Goyal, Kexin Zhao, Kavya Srinet, Mohammad Shoeybi
Trace norm regularization and faster inference for embedded speech recognition RNNs
Our optimized inference kernels are available at: https://github.com/PaddlePaddle/farm (Note: This paper was submitted to, but rejected from, ICLR 2018. We believe it may still be of value to others. Please see the discussion here: https://openreview.net/forum?id=B1tC-LT6W)
null
null
null
cs.LG cs.CL eess.AS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low-rank factored versions of matrix multiplications. Compared to standard low-rank training, we show that our method leads to good accuracy versus number-of-parameters trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open-sourced kernels optimized for small batch sizes, resulting in 3x to 7x speedups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.
[ { "created": "Wed, 25 Oct 2017 00:20:55 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2018 10:00:10 GMT", "version": "v2" } ]
2018-02-07
[ [ "Kliegl", "Markus", "" ], [ "Goyal", "Siddharth", "" ], [ "Zhao", "Kexin", "" ], [ "Srinet", "Kavya", "" ], [ "Shoeybi", "Mohammad", "" ] ]
We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low-rank factored versions of matrix multiplications. Compared to standard low-rank training, we show that our method leads to good accuracy versus number-of-parameters trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open-sourced kernels optimized for small batch sizes, resulting in 3x to 7x speedups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.
2006.00996
Lucas Deecke
Lucas Deecke, Timothy Hospedales, Hakan Bilen
Latent Domain Learning with Dynamic Residual Adapters
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A practical shortcoming of deep neural networks is their specialization to a single task and domain. While recent techniques in domain adaptation and multi-domain learning enable the learning of more domain-agnostic features, their success relies on the presence of domain labels, typically requiring manual annotation and careful curation of datasets. Here we focus on a less explored, but more realistic case: learning from data from multiple domains, without access to domain annotations. In this scenario, standard model training leads to the overfitting of large domains, while disregarding smaller ones. We address this limitation via dynamic residual adapters, an adaptive gating mechanism that helps account for latent domains, coupled with an augmentation strategy inspired by recent style transfer techniques. Our proposed approach is examined on image classification tasks containing multiple latent domains, and we showcase its ability to obtain robust performance across these. Dynamic residual adapters significantly outperform off-the-shelf networks with much larger capacity, and can be incorporated seamlessly with existing architectures in an end-to-end manner.
[ { "created": "Mon, 1 Jun 2020 15:00:11 GMT", "version": "v1" } ]
2020-06-02
[ [ "Deecke", "Lucas", "" ], [ "Hospedales", "Timothy", "" ], [ "Bilen", "Hakan", "" ] ]
A practical shortcoming of deep neural networks is their specialization to a single task and domain. While recent techniques in domain adaptation and multi-domain learning enable the learning of more domain-agnostic features, their success relies on the presence of domain labels, typically requiring manual annotation and careful curation of datasets. Here we focus on a less explored, but more realistic case: learning from data from multiple domains, without access to domain annotations. In this scenario, standard model training leads to the overfitting of large domains, while disregarding smaller ones. We address this limitation via dynamic residual adapters, an adaptive gating mechanism that helps account for latent domains, coupled with an augmentation strategy inspired by recent style transfer techniques. Our proposed approach is examined on image classification tasks containing multiple latent domains, and we showcase its ability to obtain robust performance across these. Dynamic residual adapters significantly outperform off-the-shelf networks with much larger capacity, and can be incorporated seamlessly with existing architectures in an end-to-end manner.
2407.13068
Ying Song
Ying Song, Rita Singh, Balaji Palanisamy
Krait: A Backdoor Attack Against Graph Prompt Tuning
Previously submitted to CCS on 04/29
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph prompt tuning has emerged as a promising paradigm to effectively transfer general graph knowledge from pre-trained models to various downstream tasks, particularly in few-shot contexts. However, its susceptibility to backdoor attacks, where adversaries insert triggers to manipulate outcomes, raises a critical concern. We conduct the first study of this vulnerability, revealing that backdoors can be disguised as benign graph prompts, thus evading detection. We introduce Krait, a novel graph prompt backdoor. Specifically, we propose a simple yet effective model-agnostic metric called label non-uniformity homophily to select poisoned candidates, significantly reducing computational complexity. To accommodate diverse attack scenarios and advanced attack types, we design three customizable trigger generation methods to craft prompts as triggers. We propose a novel centroid similarity-based loss function to optimize prompt tuning for attack effectiveness and stealthiness. Experiments on four real-world graphs demonstrate that Krait can efficiently embed triggers into merely 0.15% to 2% of training nodes, achieving high attack success rates without sacrificing clean accuracy. Notably, in one-to-one and all-to-one attacks, Krait can achieve 100% attack success rates by poisoning as few as 2 and 22 nodes, respectively. Our experiments further show that Krait remains potent across different transfer cases, attack types, and graph neural network backbones. Additionally, Krait can be successfully extended to the black-box setting, posing more severe threats. Finally, we analyze why Krait can evade both classical and state-of-the-art defenses, and provide practical insights for detecting and mitigating this class of attacks.
[ { "created": "Thu, 18 Jul 2024 00:25:49 GMT", "version": "v1" } ]
2024-07-19
[ [ "Song", "Ying", "" ], [ "Singh", "Rita", "" ], [ "Palanisamy", "Balaji", "" ] ]
Graph prompt tuning has emerged as a promising paradigm to effectively transfer general graph knowledge from pre-trained models to various downstream tasks, particularly in few-shot contexts. However, its susceptibility to backdoor attacks, where adversaries insert triggers to manipulate outcomes, raises a critical concern. We conduct the first study of this vulnerability, revealing that backdoors can be disguised as benign graph prompts, thus evading detection. We introduce Krait, a novel graph prompt backdoor. Specifically, we propose a simple yet effective model-agnostic metric called label non-uniformity homophily to select poisoned candidates, significantly reducing computational complexity. To accommodate diverse attack scenarios and advanced attack types, we design three customizable trigger generation methods to craft prompts as triggers. We propose a novel centroid similarity-based loss function to optimize prompt tuning for attack effectiveness and stealthiness. Experiments on four real-world graphs demonstrate that Krait can efficiently embed triggers into merely 0.15% to 2% of training nodes, achieving high attack success rates without sacrificing clean accuracy. Notably, in one-to-one and all-to-one attacks, Krait can achieve 100% attack success rates by poisoning as few as 2 and 22 nodes, respectively. Our experiments further show that Krait remains potent across different transfer cases, attack types, and graph neural network backbones. Additionally, Krait can be successfully extended to the black-box setting, posing more severe threats. Finally, we analyze why Krait can evade both classical and state-of-the-art defenses, and provide practical insights for detecting and mitigating this class of attacks.
1905.10161
George Moustakides
Kalliopi Basioti and George V. Moustakides
Optimizing Shallow Networks for Binary Classification
null
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data-driven classification that relies on neural networks is based on optimization criteria that involve some form of distance between the output of the network and the desired label. Using the same mathematical analysis, for a multitude of such measures, we can show that their optimum solution matches the ideal likelihood ratio test classifier. In this work we introduce a different family of optimization problems which is not covered by the existing approaches and, therefore, opens possibilities for new training algorithms for neural-network-based classification. We give examples that lead to algorithms that are simple to implement, exhibit stable convergence characteristics and are antagonistic to the most popular existing techniques.
[ { "created": "Fri, 24 May 2019 11:40:24 GMT", "version": "v1" }, { "created": "Mon, 24 Jun 2019 00:42:01 GMT", "version": "v2" } ]
2019-06-25
[ [ "Basioti", "Kalliopi", "" ], [ "Moustakides", "George V.", "" ] ]
Data-driven classification that relies on neural networks is based on optimization criteria that involve some form of distance between the output of the network and the desired label. Using the same mathematical analysis, for a multitude of such measures, we can show that their optimum solution matches the ideal likelihood ratio test classifier. In this work we introduce a different family of optimization problems which is not covered by the existing approaches and, therefore, opens possibilities for new training algorithms for neural-network-based classification. We give examples that lead to algorithms that are simple to implement, exhibit stable convergence characteristics and are antagonistic to the most popular existing techniques.
2305.15083
Jiahuan Li
Jiahuan Li, Hao Zhou, Shujian Huang, Shanbo Cheng, Jiajun Chen
Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions
accepted by Transaction of ACL, pre-MIT version
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Large-scale pretrained language models (LLMs), such as ChatGPT and GPT-4, have shown strong abilities in multilingual translation without being explicitly trained on parallel corpora. It is interesting to ask how LLMs obtain their ability to carry out translation instructions for different languages. In this paper, we present a detailed analysis by finetuning a multilingual pretrained language model, XGLM-7B, to perform multilingual translation following given instructions. Firstly, we show that multilingual LLMs have stronger translation abilities than previously demonstrated. For a given language, performance depends on its similarity to English and the amount of data used in the pretraining phase. Secondly, we find that LLMs' ability to carry out translation instructions relies on understanding the translation instructions and on the alignment among different languages. With multilingual finetuning, LLMs can learn to perform the translation task well even for language pairs unseen during the instruction-tuning phase.
[ { "created": "Wed, 24 May 2023 12:00:24 GMT", "version": "v1" }, { "created": "Fri, 30 Jun 2023 02:32:11 GMT", "version": "v2" }, { "created": "Thu, 14 Mar 2024 13:04:49 GMT", "version": "v3" }, { "created": "Mon, 15 Apr 2024 06:02:59 GMT", "version": "v4" } ]
2024-04-16
[ [ "Li", "Jiahuan", "" ], [ "Zhou", "Hao", "" ], [ "Huang", "Shujian", "" ], [ "Cheng", "Shanbo", "" ], [ "Chen", "Jiajun", "" ] ]
Large-scale pretrained language models (LLMs), such as ChatGPT and GPT-4, have shown strong abilities in multilingual translation without being explicitly trained on parallel corpora. It is interesting to ask how LLMs obtain their ability to carry out translation instructions for different languages. In this paper, we present a detailed analysis by finetuning a multilingual pretrained language model, XGLM-7B, to perform multilingual translation following given instructions. Firstly, we show that multilingual LLMs have stronger translation abilities than previously demonstrated. For a given language, performance depends on its similarity to English and the amount of data used in the pretraining phase. Secondly, we find that LLMs' ability to carry out translation instructions relies on understanding the translation instructions and on the alignment among different languages. With multilingual finetuning, LLMs can learn to perform the translation task well even for language pairs unseen during the instruction-tuning phase.
2403.05820
YuDong Yang
Yudong Yang, Rongfeng Su, Xiaokang Liu, Nan Yan, and Lan Wang
An Audio-textual Diffusion Model For Converting Speech Signals Into Ultrasound Tongue Imaging Data
ICASSP2024 Accept
null
null
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acoustic-to-articulatory inversion (AAI) aims to convert audio into articulator movements, such as ultrasound tongue imaging (UTI) data. An issue with existing AAI methods is that they use only personalized acoustic information to derive the general patterns of tongue motions, and thus the quality of the generated UTI data is limited. To address this issue, this paper proposes an audio-textual diffusion model for the UTI data generation task. In this model, the inherent acoustic characteristics of individuals related to tongue motion details are encoded using wav2vec 2.0, while the ASR transcriptions related to the universality of tongue motions are encoded using BERT. UTI data are then generated by a diffusion module. Experimental results showed that the proposed diffusion model could generate high-quality UTI data with clear tongue contours, which is crucial for linguistic analysis and clinical assessment. The project can be found on the website\footnote{https://yangyudong2020.github.io/wav2uti/}
[ { "created": "Sat, 9 Mar 2024 06:59:47 GMT", "version": "v1" }, { "created": "Tue, 12 Mar 2024 11:26:07 GMT", "version": "v2" } ]
2024-03-13
[ [ "Yang", "Yudong", "" ], [ "Su", "Rongfeng", "" ], [ "Liu", "Xiaokang", "" ], [ "Yan", "Nan", "" ], [ "Wang", "Lan", "" ] ]
Acoustic-to-articulatory inversion (AAI) aims to convert audio into articulator movements, such as ultrasound tongue imaging (UTI) data. An issue with existing AAI methods is that they use only personalized acoustic information to derive the general patterns of tongue motions, and thus the quality of the generated UTI data is limited. To address this issue, this paper proposes an audio-textual diffusion model for the UTI data generation task. In this model, the inherent acoustic characteristics of individuals related to tongue motion details are encoded using wav2vec 2.0, while the ASR transcriptions related to the universality of tongue motions are encoded using BERT. UTI data are then generated by a diffusion module. Experimental results showed that the proposed diffusion model could generate high-quality UTI data with clear tongue contours, which is crucial for linguistic analysis and clinical assessment. The project can be found on the website\footnote{https://yangyudong2020.github.io/wav2uti/}
2103.06769
Pierre-Yves Oudeyer
Manfred Eppe and Pierre-Yves Oudeyer
Intelligent behavior depends on the ecological niche: Scaling up AI to human-like intelligence in socio-cultural environments
Keywords: developmental AI, general artificial intelligence, human-like AI, embodiment, cultural evolution, language, socio-cultural skills
KI - K\"unstliche Intelligenz KI - K\"unstliche Intelligenz (German Journal of Artificial Intelligence), 2021
10.1007/s13218-020-00696-1
null
cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper outlines a perspective on the future of AI, discussing directions for machine models of human-like intelligence. We explain how developmental and evolutionary theories of human cognition should further inform artificial intelligence. We emphasize the role of ecological niches in sculpting intelligent behavior, and in particular that human intelligence was fundamentally shaped to adapt to a constantly changing socio-cultural environment. We argue that a major limit of current work in AI is that it is missing this perspective, both theoretically and experimentally. Finally, we discuss the promising approach of developmental artificial intelligence, modeling infant development through multi-scale interaction between intrinsically motivated learning, embodiment and a fast-changing socio-cultural environment. This paper takes the form of an interview of Pierre-Yves Oudeyer by Manfred Eppe, organized within the context of a KI - K{\"{u}}nstliche Intelligenz special issue in developmental robotics.
[ { "created": "Thu, 11 Mar 2021 16:24:00 GMT", "version": "v1" } ]
2021-03-12
[ [ "Eppe", "Manfred", "" ], [ "Oudeyer", "Pierre-Yves", "" ] ]
This paper outlines a perspective on the future of AI, discussing directions for machine models of human-like intelligence. We explain how developmental and evolutionary theories of human cognition should further inform artificial intelligence. We emphasize the role of ecological niches in sculpting intelligent behavior, and in particular that human intelligence was fundamentally shaped to adapt to a constantly changing socio-cultural environment. We argue that a major limit of current work in AI is that it is missing this perspective, both theoretically and experimentally. Finally, we discuss the promising approach of developmental artificial intelligence, modeling infant development through multi-scale interaction between intrinsically motivated learning, embodiment and a fast-changing socio-cultural environment. This paper takes the form of an interview of Pierre-Yves Oudeyer by Manfred Eppe, organized within the context of a KI - K{\"{u}}nstliche Intelligenz special issue in developmental robotics.
1207.1394
Andreas Krause
Andreas Krause, Carlos E. Guestrin
Near-optimal Nonmyopic Value of Information in Graphical Models
Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI2005)
null
null
UAI-P-2005-PG-324-331
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental issue in real-world systems, such as sensor networks, is the selection of observations which most effectively reduce uncertainty. More specifically, we address the long standing problem of nonmyopically selecting the most informative subset of variables in a graphical model. We present the first efficient randomized algorithm providing a constant factor (1-1/e-epsilon) approximation guarantee for any epsilon > 0 with high confidence. The algorithm leverages the theory of submodular functions, in combination with a polynomial bound on sample complexity. We furthermore prove that no polynomial time algorithm can provide a constant factor approximation better than (1 - 1/e) unless P = NP. Finally, we provide extensive evidence of the effectiveness of our method on two complex real-world datasets.
[ { "created": "Wed, 4 Jul 2012 16:16:25 GMT", "version": "v1" } ]
2012-07-09
[ [ "Krause", "Andreas", "" ], [ "Guestrin", "Carlos E.", "" ] ]
A fundamental issue in real-world systems, such as sensor networks, is the selection of observations which most effectively reduce uncertainty. More specifically, we address the long standing problem of nonmyopically selecting the most informative subset of variables in a graphical model. We present the first efficient randomized algorithm providing a constant factor (1-1/e-epsilon) approximation guarantee for any epsilon > 0 with high confidence. The algorithm leverages the theory of submodular functions, in combination with a polynomial bound on sample complexity. We furthermore prove that no polynomial time algorithm can provide a constant factor approximation better than (1 - 1/e) unless P = NP. Finally, we provide extensive evidence of the effectiveness of our method on two complex real-world datasets.
2404.08601
Alexander Sommers Mr.
Alexander Sommers and Somayeh Bakhtiari Ramezani and Logan Cummins and Sudip Mittal and Shahram Rahimi and Maria Seale and Joseph Jaboure
Generating Synthetic Time Series Data for Cyber-Physical Systems
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Data augmentation is an important facilitator of deep learning applications in the time series domain. A gap is identified in the literature, demonstrating sparse exploration of the transformer, the dominant sequence model, for data augmentation in time series. An architecture hybridizing several successful priors is put forth and tested using a powerful time-domain similarity metric. Results suggest the challenge of this domain, and several valuable directions for future work.
[ { "created": "Fri, 12 Apr 2024 16:55:08 GMT", "version": "v1" } ]
2024-04-15
[ [ "Sommers", "Alexander", "" ], [ "Ramezani", "Somayeh Bakhtiari", "" ], [ "Cummins", "Logan", "" ], [ "Mittal", "Sudip", "" ], [ "Rahimi", "Shahram", "" ], [ "Seale", "Maria", "" ], [ "Jaboure", "Joseph", "" ] ]
Data augmentation is an important facilitator of deep learning applications in the time series domain. A gap is identified in the literature, demonstrating sparse exploration of the transformer, the dominant sequence model, for data augmentation in time series. An architecture hybridizing several successful priors is put forth and tested using a powerful time-domain similarity metric. Results suggest the challenge of this domain, and several valuable directions for future work.
1009.4521
S. M. Kamruzzaman
S. M. Kamruzzaman
CR-MAC: A multichannel MAC protocol for cognitive radio ad hoc networks
14 Pages, International Journal
International Journal of Computer Networks & Communications (IJCNC), Vol.2, No.5, pp. 1-14, Sep. 2010
10.5121/ijcnc.2010.2501
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a cross-layer-based cognitive radio multichannel medium access control (MAC) protocol with TDMA, which integrates spectrum sensing at the physical (PHY) layer and packet scheduling at the MAC layer, for ad hoc wireless networks. The IEEE 802.11 standard allows for the use of multiple channels available at the PHY layer, but its MAC protocol is designed only for a single channel. A single-channel MAC protocol does not work well in a multichannel environment because of the multichannel hidden terminal problem. Our proposed protocol enables secondary users (SUs) to utilize multiple channels by switching channels dynamically, thus increasing network throughput. In our proposed protocol, each SU is equipped with only one spectrum-agile transceiver, but solves the multichannel hidden terminal problem using temporal synchronization. The proposed cognitive radio MAC (CR-MAC) protocol allows SUs to identify and use the unused frequency spectrum in a way that constrains the level of interference to the primary users (PUs). Our scheme improves network throughput significantly, especially when the network is highly congested. The simulation results show that our proposed CR-MAC protocol successfully exploits multiple channels and significantly improves network performance by using the licensed spectrum band opportunistically and protecting PUs from interference, even in hidden terminal situations.
[ { "created": "Thu, 23 Sep 2010 05:38:52 GMT", "version": "v1" } ]
2010-09-28
[ [ "Kamruzzaman", "S. M.", "" ] ]
This paper proposes a cross-layer-based cognitive radio multichannel medium access control (MAC) protocol with TDMA, which integrates spectrum sensing at the physical (PHY) layer and packet scheduling at the MAC layer, for ad hoc wireless networks. The IEEE 802.11 standard allows for the use of multiple channels available at the PHY layer, but its MAC protocol is designed only for a single channel. A single-channel MAC protocol does not work well in a multichannel environment because of the multichannel hidden terminal problem. Our proposed protocol enables secondary users (SUs) to utilize multiple channels by switching channels dynamically, thus increasing network throughput. In our proposed protocol, each SU is equipped with only one spectrum-agile transceiver, but solves the multichannel hidden terminal problem using temporal synchronization. The proposed cognitive radio MAC (CR-MAC) protocol allows SUs to identify and use the unused frequency spectrum in a way that constrains the level of interference to the primary users (PUs). Our scheme improves network throughput significantly, especially when the network is highly congested. The simulation results show that our proposed CR-MAC protocol successfully exploits multiple channels and significantly improves network performance by using the licensed spectrum band opportunistically and protecting PUs from interference, even in hidden terminal situations.
2102.08228
James Cheney
James Cheney, Adriane Chapman, Joy Davidson, and Alistair Forbes
Data provenance, curation and quality in metrology
null
null
10.1142/9789811242380_0009
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data metrology -- the assessment of the quality of data -- particularly in scientific and industrial settings, has emerged as an important requirement for the UK National Physical Laboratory (NPL) and other national metrology institutes. Data provenance and data curation are key components of an emerging understanding of data metrology. However, to date, provenance research has had limited visibility to, or uptake in, metrology. In this work, we summarize a scoping study carried out with NPL staff and industrial participants to understand their current and future needs for provenance, curation and data quality. We then survey provenance technology and standards that are relevant to metrology. We analyse the gaps between requirements and the current state of the art.
[ { "created": "Tue, 16 Feb 2021 15:44:27 GMT", "version": "v1" } ]
2023-04-12
[ [ "Cheney", "James", "" ], [ "Chapman", "Adriane", "" ], [ "Davidson", "Joy", "" ], [ "Forbes", "Alistair", "" ] ]
Data metrology -- the assessment of the quality of data -- particularly in scientific and industrial settings, has emerged as an important requirement for the UK National Physical Laboratory (NPL) and other national metrology institutes. Data provenance and data curation are key components of an emerging understanding of data metrology. However, to date, provenance research has had limited visibility to, or uptake in, metrology. In this work, we summarize a scoping study carried out with NPL staff and industrial participants to understand their current and future needs for provenance, curation and data quality. We then survey provenance technology and standards that are relevant to metrology. We analyse the gaps between requirements and the current state of the art.
2104.11435
Beomyoung Kim
Beomyoung Kim, Janghyeon Lee, Sihaeng Lee, Doyeon Kim, and Junmo Kim
TricubeNet: 2D Kernel-Based Object Representation for Weakly-Occluded Oriented Object Detection
WACV 2022, Accepted
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel approach for oriented object detection, named TricubeNet, which localizes oriented objects using visual cues ($i.e.,$ a heatmap) instead of oriented-box offset regression. We represent each object as a 2D Tricube kernel and extract bounding boxes using simple image-processing algorithms. Our approach is able to (1) obtain well-arranged boxes from visual cues, (2) solve the angle discontinuity problem, and (3) save computational cost thanks to our anchor-free modeling. To further boost performance, we propose several effective techniques: a size-invariant loss, false-detection reduction, rotation-invariant feature extraction, and heatmap refinement. To demonstrate the effectiveness of TricubeNet, we experiment on various tasks for weakly-occluded oriented object detection: detection in aerial images, densely packed object images, and text images. The extensive experimental results show that TricubeNet is quite effective for oriented object detection. Code is available at https://github.com/qjadud1994/TricubeNet.
[ { "created": "Fri, 23 Apr 2021 06:50:28 GMT", "version": "v1" }, { "created": "Tue, 5 Oct 2021 11:54:49 GMT", "version": "v2" } ]
2021-10-06
[ [ "Kim", "Beomyoung", "" ], [ "Lee", "Janghyeon", "" ], [ "Lee", "Sihaeng", "" ], [ "Kim", "Doyeon", "" ], [ "Kim", "Junmo", "" ] ]
We present a novel approach for oriented object detection, named TricubeNet, which localizes oriented objects using visual cues ($i.e.,$ a heatmap) instead of oriented-box offset regression. We represent each object as a 2D Tricube kernel and extract bounding boxes using simple image-processing algorithms. Our approach is able to (1) obtain well-arranged boxes from visual cues, (2) solve the angle discontinuity problem, and (3) save computational cost thanks to our anchor-free modeling. To further boost performance, we propose several effective techniques: a size-invariant loss, false-detection reduction, rotation-invariant feature extraction, and heatmap refinement. To demonstrate the effectiveness of TricubeNet, we experiment on various tasks for weakly-occluded oriented object detection: detection in aerial images, densely packed object images, and text images. The extensive experimental results show that TricubeNet is quite effective for oriented object detection. Code is available at https://github.com/qjadud1994/TricubeNet.
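As a rough illustration of the 2D kernel representation described above, a plain (axis-aligned, unrotated) Tricube heatmap can be generated as the separable product of 1D tricube weights (1 - |u|^3)^3 on [-1, 1]; this is only a sketch of the general idea, and the paper's oriented, aspect-aware parameterization is not reproduced here.

```python
import numpy as np

def tricube_2d(h, w):
    """A h x w Tricube kernel: the outer product of 1D tricube
    weights (1 - |u|^3)^3 sampled on [-1, 1].  Peaks at 1 in the
    center and falls smoothly to 0 at the borders."""
    def tri(n):
        u = np.linspace(-1.0, 1.0, n)
        return (1.0 - np.abs(u) ** 3) ** 3
    return np.outer(tri(h), tri(w))
```

Placing one such kernel per object in a heatmap gives the kind of visual cue from which boxes can then be recovered with ordinary image-processing steps such as thresholding and connected components.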
2407.18795
Jesper Larsson Tr\"aff
Jesper Larsson Tr\"aff
Lectures on Parallel Computing
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
These lecture notes are designed to accompany an imaginary, virtual, undergraduate, one- or two-semester course on the fundamentals of Parallel Computing, as well as to serve as background and reference for graduate courses on High-Performance Computing, parallel algorithms and shared-memory multiprocessor programming. They introduce theoretical concepts and tools for expressing, analyzing and judging parallel algorithms and, in detail, cover the two most widely used concrete frameworks, OpenMP and MPI, as well as the threading interface pthreads, for writing parallel programs for either shared- or distributed-memory parallel computers, with emphasis on general concepts and principles. Code examples are given in a C-like style, and many are actual, correct C code. The lecture notes deliberately do not cover GPU architectures and GPU programming, but the general concerns, guidelines and principles (time, work, cost, efficiency, scalability, memory structure and bandwidth) will be just as relevant for efficiently utilizing various GPU architectures. Likewise, the lecture notes focus on deterministic algorithms only and do not use randomization. The student of this material will find it instructive to take the time to understand concepts and algorithms visually. The exercises can be used for self-study and as inspiration for small implementation projects in OpenMP and MPI that can and should accompany any serious course on Parallel Computing. The student will benefit from actually implementing and carefully benchmarking the suggested algorithms on the parallel computing system that may or should be made available as part of such a Parallel Computing course. In class, the exercises can be used as a basis for hand-ins and small programming projects, for which sufficient additional detail and precision should be provided by the instructor.
[ { "created": "Fri, 26 Jul 2024 14:58:22 GMT", "version": "v1" } ]
2024-07-29
[ [ "Träff", "Jesper Larsson", "" ] ]
These lecture notes are designed to accompany an imaginary, virtual, undergraduate, one- or two-semester course on the fundamentals of Parallel Computing, as well as to serve as background and reference for graduate courses on High-Performance Computing, parallel algorithms and shared-memory multiprocessor programming. They introduce theoretical concepts and tools for expressing, analyzing and judging parallel algorithms and, in detail, cover the two most widely used concrete frameworks, OpenMP and MPI, as well as the threading interface pthreads, for writing parallel programs for either shared- or distributed-memory parallel computers, with emphasis on general concepts and principles. Code examples are given in a C-like style, and many are actual, correct C code. The lecture notes deliberately do not cover GPU architectures and GPU programming, but the general concerns, guidelines and principles (time, work, cost, efficiency, scalability, memory structure and bandwidth) will be just as relevant for efficiently utilizing various GPU architectures. Likewise, the lecture notes focus on deterministic algorithms only and do not use randomization. The student of this material will find it instructive to take the time to understand concepts and algorithms visually. The exercises can be used for self-study and as inspiration for small implementation projects in OpenMP and MPI that can and should accompany any serious course on Parallel Computing. The student will benefit from actually implementing and carefully benchmarking the suggested algorithms on the parallel computing system that may or should be made available as part of such a Parallel Computing course. In class, the exercises can be used as a basis for hand-ins and small programming projects, for which sufficient additional detail and precision should be provided by the instructor.
1805.05713
Mari Kobayashi
Mari Kobayashi, Giuseppe Caire, Gerhard Kramer
Joint State Sensing and Communication: Optimal Tradeoff for a Memoryless Case
To be presented at IEEE International Symposium on Information Theory (ISIT), Jun. 2018
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A communication setup is considered where a transmitter wishes to simultaneously sense its channel state and convey a message to a receiver. The state is estimated at the transmitter by means of generalized feedback, i.e. a strictly causal channel output that is observed at the transmitter. The scenario is motivated by a joint radar and communication system where the radar and data applications share the same frequency band. For the case of a memoryless channel with i.i.d. state sequences, we characterize the capacity-distortion tradeoff, defined as the best achievable rate below which a message can be conveyed reliably while satisfying some distortion constraint on state sensing. An iterative algorithm is proposed to optimize the input probability distribution. Examples demonstrate the benefits of joint sensing and communication as compared to a separation-based approach.
[ { "created": "Tue, 15 May 2018 11:35:25 GMT", "version": "v1" } ]
2018-05-16
[ [ "Kobayashi", "Mari", "" ], [ "Caire", "Giuseppe", "" ], [ "Kramer", "Gerhard", "" ] ]
A communication setup is considered where a transmitter wishes to simultaneously sense its channel state and convey a message to a receiver. The state is estimated at the transmitter by means of generalized feedback, i.e. a strictly causal channel output that is observed at the transmitter. The scenario is motivated by a joint radar and communication system where the radar and data applications share the same frequency band. For the case of a memoryless channel with i.i.d. state sequences, we characterize the capacity-distortion tradeoff, defined as the best achievable rate below which a message can be conveyed reliably while satisfying some distortion constraint on state sensing. An iterative algorithm is proposed to optimize the input probability distribution. Examples demonstrate the benefits of joint sensing and communication as compared to a separation-based approach.
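The iterative optimization of the input distribution mentioned in the abstract is in the spirit of the classical Blahut-Arimoto algorithm; the sketch below shows only the standard, unconstrained capacity computation for a discrete memoryless channel, not the paper's distortion-constrained variant.

```python
import numpy as np

def _kl(row, q):
    # KL divergence D(row || q) in nats, skipping zero-probability entries
    m = row > 0
    return float(np.sum(row[m] * np.log(row[m] / q[m])))

def blahut_arimoto(P, iters=100):
    """Classical Blahut-Arimoto iteration for the capacity of a discrete
    memoryless channel with transition matrix P[x, y] = P(y|x).
    Returns (capacity in bits, optimal input distribution)."""
    nx = P.shape[0]
    p = np.full(nx, 1.0 / nx)          # start from the uniform input law
    for _ in range(iters):
        q = p @ P                      # induced output distribution
        d = np.array([_kl(P[x], q) for x in range(nx)])
        p = p * np.exp(d)              # exponential reweighting step
        p /= p.sum()
    q = p @ P
    cap_nats = sum(p[x] * _kl(P[x], q) for x in range(nx))
    return cap_nats / np.log(2), p
```

A distortion constraint on state sensing, as in the paper, would add a penalty term to the reweighting step; that modification is not shown here.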
2106.10910
Fotios Lazarinis
Fotis Lazarinis, Dimitris Kanellopoulos
Fostering Student Engagement in a Mobile Formative Assessment System for High-School Economics
10 pages, 3 images
Social Education Research, 2021
10.37256/ser.222021747
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
In a mobile learning environment, students can learn via mobile devices without being limited by time and space. Therefore, it is vital to develop tools that assist students in learning and in assessing their knowledge in such environments. This paper presents a tool/application for formative self-assessment. The tool supports the selection of questions based on user-defined criteria concerning (1) the difficulty level; (2) the associated concepts; and (3) the purposes of the test taker. The main purpose of the presented tool is to better support the learning aims of the participants and to increase their engagement in the learning process. The focus of this study is to evaluate the tool using quizzes in Microeconomics to realize its potential in this specific domain. Teachers and students were involved in the experiments conducted. The experiments demonstrated that the presented tool is usable; it motivates the students and improves their understanding.
[ { "created": "Mon, 21 Jun 2021 08:15:49 GMT", "version": "v1" } ]
2021-06-22
[ [ "Lazarinis", "Fotis", "" ], [ "Kanellopoulos", "Dimitris", "" ] ]
In a mobile learning environment, students can learn via mobile devices without being limited by time and space. Therefore, it is vital to develop tools that assist students in learning and in assessing their knowledge in such environments. This paper presents a tool/application for formative self-assessment. The tool supports the selection of questions based on user-defined criteria concerning (1) the difficulty level; (2) the associated concepts; and (3) the purposes of the test taker. The main purpose of the presented tool is to better support the learning aims of the participants and to increase their engagement in the learning process. The focus of this study is to evaluate the tool using quizzes in Microeconomics to realize its potential in this specific domain. Teachers and students were involved in the experiments conducted. The experiments demonstrated that the presented tool is usable; it motivates the students and improves their understanding.
2012.04457
Minchen Li
Minchen Li, Danny M. Kaufman, Chenfanfu Jiang
Codimensional Incremental Potential Contact
null
null
10.1145/3450626.3459767
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We extend the incremental potential contact (IPC) model for contacting elastodynamics to resolve systems composed of codimensional DOFs in arbitrary combination. This enables a unified, interpenetration-free, robust, and stable simulation framework that couples codimension-0, 1, 2, and 3 geometries seamlessly with frictional contact. Extending IPC to thin structures poses new challenges in computing strain, modeling thickness, and determining collisions. To address these challenges we propose three corresponding contributions. First, we introduce a C2 constitutive barrier model that directly enforces strain limiting as an energy potential while preserving the rest state. This provides energetically consistent strain-limiting models (both isotropic and anisotropic) for cloth that enable strict satisfaction of strain-limit inequalities with direct coupling to both elastodynamics and contact via minimization of the incremental potential. Second, to capture the geometric thickness of codimensional domains we extend the IPC model to directly enforce distance offsets. Our treatment imposes a strict guarantee that mid-surfaces (resp. mid-lines) of shells (resp. rods) will not move closer than applied thickness values. This enables us to account for thickness in the contact behavior of codimensional structures and so robustly capture challenging contacting geometries; a number of which, to our knowledge, have not been simulated before. Third, codimensional models, especially with modeled thickness, mandate strict accuracy requirements that pose a severe challenge to all existing continuous collision detection (CCD) methods. To address these limitations we develop a new, efficient, simple-to-implement additive CCD (ACCD) method that applies conservative advancement to iteratively refine a lower bound for deforming primitives, converging to the time of impact.
[ { "created": "Mon, 7 Dec 2020 13:26:52 GMT", "version": "v1" }, { "created": "Fri, 29 Jan 2021 21:14:56 GMT", "version": "v2" }, { "created": "Wed, 5 May 2021 22:49:59 GMT", "version": "v3" } ]
2021-05-07
[ [ "Li", "Minchen", "" ], [ "Kaufman", "Danny M.", "" ], [ "Jiang", "Chenfanfu", "" ] ]
We extend the incremental potential contact (IPC) model for contacting elastodynamics to resolve systems composed of codimensional DOFs in arbitrary combination. This enables a unified, interpenetration-free, robust, and stable simulation framework that couples codimension-0, 1, 2, and 3 geometries seamlessly with frictional contact. Extending IPC to thin structures poses new challenges in computing strain, modeling thickness, and determining collisions. To address these challenges we propose three corresponding contributions. First, we introduce a C2 constitutive barrier model that directly enforces strain limiting as an energy potential while preserving the rest state. This provides energetically consistent strain-limiting models (both isotropic and anisotropic) for cloth that enable strict satisfaction of strain-limit inequalities with direct coupling to both elastodynamics and contact via minimization of the incremental potential. Second, to capture the geometric thickness of codimensional domains we extend the IPC model to directly enforce distance offsets. Our treatment imposes a strict guarantee that mid-surfaces (resp. mid-lines) of shells (resp. rods) will not move closer than applied thickness values. This enables us to account for thickness in the contact behavior of codimensional structures and so robustly capture challenging contacting geometries; a number of which, to our knowledge, have not been simulated before. Third, codimensional models, especially with modeled thickness, mandate strict accuracy requirements that pose a severe challenge to all existing continuous collision detection (CCD) methods. To address these limitations we develop a new, efficient, simple-to-implement additive CCD (ACCD) method that applies conservative advancement to iteratively refine a lower bound for deforming primitives, converging to the time of impact.
1706.06363
Krzysztof Wr\'obel
Krzysztof Wr\'obel, Maciej Wielgosz, Marcin Pietro\'n, Micha{\l} Karwatowski, Aleksander Smywi\'nski-Pohl
Improving text classification with vectors of reduced precision
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an analysis of the impact of floating-point precision reduction on the quality of text classification. Reducing the precision of the vectors representing the data (e.g., the TF-IDF representation in our case) decreases computing time and memory footprint on dedicated hardware platforms. The impact of precision reduction on classification quality was evaluated on 5 corpora, using 4 different classifiers. Dimensionality reduction was also taken into account. Results indicate that precision reduction improves classification accuracy in most cases (up to 25% error reduction). In general, the reduction from 64 to 4 bits gives the best scores and ensures that the results will not be worse than with the full floating-point representation.
[ { "created": "Tue, 20 Jun 2017 11:13:06 GMT", "version": "v1" } ]
2017-06-21
[ [ "Wróbel", "Krzysztof", "" ], [ "Wielgosz", "Maciej", "" ], [ "Pietroń", "Marcin", "" ], [ "Karwatowski", "Michał", "" ], [ "Smywiński-Pohl", "Aleksander", "" ] ]
This paper presents an analysis of the impact of floating-point precision reduction on the quality of text classification. Reducing the precision of the vectors representing the data (e.g., the TF-IDF representation in our case) decreases computing time and memory footprint on dedicated hardware platforms. The impact of precision reduction on classification quality was evaluated on 5 corpora, using 4 different classifiers. Dimensionality reduction was also taken into account. Results indicate that precision reduction improves classification accuracy in most cases (up to 25% error reduction). In general, the reduction from 64 to 4 bits gives the best scores and ensures that the results will not be worse than with the full floating-point representation.
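To illustrate the idea of reduced-precision feature vectors, one simple emulation is uniform quantization to 2^bits levels; the abstract does not specify the exact floating-point reduction scheme used, so the `quantize` helper below is a hypothetical stand-in.

```python
import numpy as np

def quantize(v, bits):
    """Uniformly quantize a nonnegative vector to 2**bits levels over
    its own range -- a hypothetical stand-in for floating-point
    precision reduction of TF-IDF vectors."""
    levels = 2 ** bits - 1
    vmax = v.max()
    if vmax == 0:
        return v.copy()
    return np.round(v / vmax * levels) / levels * vmax

# A toy TF-IDF-like vector and its 4-bit and 1-bit quantized versions
tfidf = np.array([0.0, 0.12, 0.5, 0.87, 1.0])
q4 = quantize(tfidf, 4)   # 16 levels: close to the original
q1 = quantize(tfidf, 1)   # 2 levels: very coarse
```

Training the same classifier on `q4` versus `tfidf` would be one way to reproduce the kind of accuracy comparison the paper reports.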
2006.15009
Thomas Moerland
Thomas M. Moerland, Joost Broekens, Aske Plaat, Catholijn M. Jonker
A Unifying Framework for Reinforcement Learning and Planning
null
null
null
null
cs.LG cs.AI cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequential decision making, commonly formalized as the optimization of a Markov Decision Process (MDP), is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are reinforcement learning and planning, each of which largely has its own research community. However, if both research fields solve the same problem, then we might be able to disentangle the common factors in their solution approaches. Therefore, this paper presents a unifying algorithmic framework for reinforcement learning and planning (FRAP), which identifies the underlying dimensions on which MDP planning and learning algorithms have to decide. At the end of the paper, we compare a variety of well-known planning, model-free, and model-based RL algorithms along these dimensions. Altogether, the framework may help provide deeper insight into the algorithmic design space of planning and reinforcement learning.
[ { "created": "Fri, 26 Jun 2020 14:30:41 GMT", "version": "v1" }, { "created": "Thu, 2 Jul 2020 08:52:43 GMT", "version": "v2" }, { "created": "Thu, 23 Jul 2020 15:02:03 GMT", "version": "v3" }, { "created": "Thu, 31 Mar 2022 08:06:35 GMT", "version": "v4" } ]
2022-04-01
[ [ "Moerland", "Thomas M.", "" ], [ "Broekens", "Joost", "" ], [ "Plaat", "Aske", "" ], [ "Jonker", "Catholijn M.", "" ] ]
Sequential decision making, commonly formalized as the optimization of a Markov Decision Process (MDP), is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are reinforcement learning and planning, each of which largely has its own research community. However, if both research fields solve the same problem, then we might be able to disentangle the common factors in their solution approaches. Therefore, this paper presents a unifying algorithmic framework for reinforcement learning and planning (FRAP), which identifies the underlying dimensions on which MDP planning and learning algorithms have to decide. At the end of the paper, we compare a variety of well-known planning, model-free, and model-based RL algorithms along these dimensions. Altogether, the framework may help provide deeper insight into the algorithmic design space of planning and reinforcement learning.
2303.06745
Arthur Bik
Arthur Bik, Alessandro Neri
Higher-degree symmetric rank-metric codes
26 pages
null
null
null
cs.IT math.AC math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over fields of characteristic unequal to $2$, we can identify symmetric matrices with homogeneous polynomials of degree $2$. This allows us to view symmetric rank-metric codes as living inside the space of such polynomials. In this paper, we generalize the construction of symmetric Gabidulin codes to polynomials of degree $d>2$ over fields of characteristic $0$ or $>d$. To do so, we equip the space of homogeneous polynomials of degree $d\geq 2$ with the metric induced by the essential rank, which is the minimal number of linear forms needed to express a polynomial. We provide bounds on the minimum distance and dimension of the essential-rank metric codes we construct, and give an efficient decoding algorithm. Finally, we show how essential-rank metric codes can be seen as special instances of rank-metric codes and compare our construction to known rank-metric codes with the same parameters.
[ { "created": "Sun, 12 Mar 2023 20:37:32 GMT", "version": "v1" } ]
2023-03-14
[ [ "Bik", "Arthur", "" ], [ "Neri", "Alessandro", "" ] ]
Over fields of characteristic unequal to $2$, we can identify symmetric matrices with homogeneous polynomials of degree $2$. This allows us to view symmetric rank-metric codes as living inside the space of such polynomials. In this paper, we generalize the construction of symmetric Gabidulin codes to polynomials of degree $d>2$ over fields of characteristic $0$ or $>d$. To do so, we equip the space of homogeneous polynomials of degree $d\geq 2$ with the metric induced by the essential rank, which is the minimal number of linear forms needed to express a polynomial. We provide bounds on the minimum distance and dimension of the essential-rank metric codes we construct, and give an efficient decoding algorithm. Finally, we show how essential-rank metric codes can be seen as special instances of rank-metric codes and compare our construction to known rank-metric codes with the same parameters.
2211.05739
Mohak Chadha
Mohamed Elzohairy, Mohak Chadha, Anshul Jindal, Andreas Grafberger, Jianfeng Gu, Michael Gerndt, Osama Abboud
FedLesScan: Mitigating Stragglers in Serverless Federated Learning
IEEE BigData 2022
null
10.1109/BigData55660.2022.10021037
null
cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated Learning (FL) is a machine learning paradigm that enables the training of a shared global model across distributed clients while keeping the training data local. While most prior work on designing systems for FL has focused on using stateful, always-running components, recent work has shown that components in an FL system can greatly benefit from the usage of serverless computing and Function-as-a-Service technologies. To this end, distributed training of models with serverless FL systems can be more resource-efficient and cheaper than conventional FL systems. However, serverless FL systems still suffer from the presence of stragglers, i.e., slow clients due to their resource and statistical heterogeneity. While several strategies have been proposed for mitigating stragglers in FL, most methodologies do not account for the particular characteristics of serverless environments, i.e., cold starts, performance variations, and the ephemeral, stateless nature of the function instances. Towards this, we propose FedLesScan, a novel clustering-based, semi-asynchronous training strategy specifically tailored for serverless FL. FedLesScan dynamically adapts to the behaviour of clients and minimizes the effect of stragglers on the overall system. We implement our strategy by extending an open-source serverless FL system called FedLess. Moreover, we comprehensively evaluate our strategy using the 2nd generation Google Cloud Functions with four datasets and varying percentages of stragglers. Results from our experiments show that, compared to other approaches, FedLesScan reduces training time and cost by an average of 8% and 20%, respectively, while utilizing clients better, with an average increase in the effective update ratio of 17.75%.
[ { "created": "Thu, 10 Nov 2022 18:17:41 GMT", "version": "v1" }, { "created": "Mon, 28 Nov 2022 17:58:34 GMT", "version": "v2" } ]
2023-02-21
[ [ "Elzohairy", "Mohamed", "" ], [ "Chadha", "Mohak", "" ], [ "Jindal", "Anshul", "" ], [ "Grafberger", "Andreas", "" ], [ "Gu", "Jianfeng", "" ], [ "Gerndt", "Michael", "" ], [ "Abboud", "Osama", "" ] ]
Federated Learning (FL) is a machine learning paradigm that enables the training of a shared global model across distributed clients while keeping the training data local. While most prior work on designing systems for FL has focused on using stateful, always-running components, recent work has shown that components in an FL system can greatly benefit from the usage of serverless computing and Function-as-a-Service technologies. To this end, distributed training of models with serverless FL systems can be more resource-efficient and cheaper than conventional FL systems. However, serverless FL systems still suffer from the presence of stragglers, i.e., slow clients due to their resource and statistical heterogeneity. While several strategies have been proposed for mitigating stragglers in FL, most methodologies do not account for the particular characteristics of serverless environments, i.e., cold starts, performance variations, and the ephemeral, stateless nature of the function instances. Towards this, we propose FedLesScan, a novel clustering-based, semi-asynchronous training strategy specifically tailored for serverless FL. FedLesScan dynamically adapts to the behaviour of clients and minimizes the effect of stragglers on the overall system. We implement our strategy by extending an open-source serverless FL system called FedLess. Moreover, we comprehensively evaluate our strategy using the 2nd generation Google Cloud Functions with four datasets and varying percentages of stragglers. Results from our experiments show that, compared to other approaches, FedLesScan reduces training time and cost by an average of 8% and 20%, respectively, while utilizing clients better, with an average increase in the effective update ratio of 17.75%.
2304.06364
Wanjun Zhong
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen and Nan Duan
AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models
19 pages
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evaluating the general abilities of foundation models to tackle human-level tasks is a vital aspect of their development and application in the pursuit of Artificial General Intelligence (AGI). Traditional benchmarks, which rely on artificial datasets, may not accurately represent human-level capabilities. In this paper, we introduce AGIEval, a novel benchmark specifically designed to assess foundation models in the context of human-centric standardized exams, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We evaluate several state-of-the-art foundation models, including GPT-4, ChatGPT, and Text-Davinci-003, using this benchmark. Impressively, GPT-4 surpasses average human performance on SAT, LSAT, and math competitions, attaining a 95% accuracy rate on the SAT Math test and 92.5% accuracy on the English test of the Chinese national college entrance exam. This demonstrates the extraordinary performance of contemporary foundation models. In contrast, we also find that GPT-4 is less proficient in tasks that require complex reasoning or specific domain knowledge. Our comprehensive analyses of model capabilities (understanding, knowledge, reasoning, and calculation) reveal these models' strengths and limitations, providing valuable insights into future directions for enhancing their general capabilities. By concentrating on tasks pertinent to human cognition and decision-making, our benchmark delivers a more meaningful and robust evaluation of foundation models' performance in real-world scenarios. The data, code, and all model outputs are released at https://github.com/ruixiangcui/AGIEval.
[ { "created": "Thu, 13 Apr 2023 09:39:30 GMT", "version": "v1" }, { "created": "Mon, 18 Sep 2023 14:23:02 GMT", "version": "v2" } ]
2023-09-19
[ [ "Zhong", "Wanjun", "" ], [ "Cui", "Ruixiang", "" ], [ "Guo", "Yiduo", "" ], [ "Liang", "Yaobo", "" ], [ "Lu", "Shuai", "" ], [ "Wang", "Yanlin", "" ], [ "Saied", "Amin", "" ], [ "Chen", "Weizhu", "" ], [ "Duan", "Nan", "" ] ]
Evaluating the general abilities of foundation models to tackle human-level tasks is a vital aspect of their development and application in the pursuit of Artificial General Intelligence (AGI). Traditional benchmarks, which rely on artificial datasets, may not accurately represent human-level capabilities. In this paper, we introduce AGIEval, a novel benchmark specifically designed to assess foundation models in the context of human-centric standardized exams, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We evaluate several state-of-the-art foundation models, including GPT-4, ChatGPT, and Text-Davinci-003, using this benchmark. Impressively, GPT-4 surpasses average human performance on SAT, LSAT, and math competitions, attaining a 95% accuracy rate on the SAT Math test and 92.5% accuracy on the English test of the Chinese national college entrance exam. This demonstrates the extraordinary performance of contemporary foundation models. In contrast, we also find that GPT-4 is less proficient in tasks that require complex reasoning or specific domain knowledge. Our comprehensive analyses of model capabilities (understanding, knowledge, reasoning, and calculation) reveal these models' strengths and limitations, providing valuable insights into future directions for enhancing their general capabilities. By concentrating on tasks pertinent to human cognition and decision-making, our benchmark delivers a more meaningful and robust evaluation of foundation models' performance in real-world scenarios. The data, code, and all model outputs are released at https://github.com/ruixiangcui/AGIEval.
1606.03753
Andrew Suk
Jacob Fox, Janos Pach, Andrew Suk
Approximating the rectilinear crossing number
Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016)
null
null
null
cs.CG math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A straight-line drawing of a graph $G$ is a mapping which assigns to each vertex a point in the plane and to each edge a straight-line segment connecting the corresponding two points. The rectilinear crossing number of a graph $G$, $\overline{cr}(G)$, is the minimum number of crossing edges in any straight-line drawing of $G$. Determining or estimating $\overline{cr}(G)$ appears to be a difficult problem, and deciding if $\overline{cr}(G)\leq k$ is known to be NP-hard. In fact, the asymptotic behavior of $\overline{cr}(K_n)$ is still unknown. In this paper, we present a deterministic $n^{2+o(1)}$-time algorithm that finds a straight-line drawing of any $n$-vertex graph $G$ with $\overline{cr}(G) + o(n^4)$ crossing edges. Together with the well-known Crossing Lemma due to Ajtai et al. and Leighton, this result implies that for any dense $n$-vertex graph $G$, one can efficiently find a straight-line drawing of $G$ with $(1 + o(1))\overline{cr}(G)$ crossing edges.
[ { "created": "Sun, 12 Jun 2016 18:42:44 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2016 18:27:34 GMT", "version": "v2" } ]
2016-09-08
[ [ "Fox", "Jacob", "" ], [ "Pach", "Janos", "" ], [ "Suk", "Andrew", "" ] ]
A straight-line drawing of a graph $G$ is a mapping which assigns to each vertex a point in the plane and to each edge a straight-line segment connecting the corresponding two points. The rectilinear crossing number of a graph $G$, $\overline{cr}(G)$, is the minimum number of crossing edges in any straight-line drawing of $G$. Determining or estimating $\overline{cr}(G)$ appears to be a difficult problem, and deciding if $\overline{cr}(G)\leq k$ is known to be NP-hard. In fact, the asymptotic behavior of $\overline{cr}(K_n)$ is still unknown. In this paper, we present a deterministic $n^{2+o(1)}$-time algorithm that finds a straight-line drawing of any $n$-vertex graph $G$ with $\overline{cr}(G) + o(n^4)$ crossing edges. Together with the well-known Crossing Lemma due to Ajtai et al. and Leighton, this result implies that for any dense $n$-vertex graph $G$, one can efficiently find a straight-line drawing of $G$ with $(1 + o(1))\overline{cr}(G)$ crossing edges.
1805.09730
Abel Gonzalez-Garcia
Abel Gonzalez-Garcia, Joost van de Weijer, Yoshua Bengio
Image-to-image translation for cross-domain disentanglement
Accepted to NIPS 2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domain-specific image transfer and interpolation, and carry out cross-domain retrieval without the need for labeled data, using only paired images. We compare our model to the state-of-the-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.
[ { "created": "Thu, 24 May 2018 15:30:23 GMT", "version": "v1" }, { "created": "Fri, 1 Jun 2018 14:58:32 GMT", "version": "v2" }, { "created": "Sun, 4 Nov 2018 17:27:04 GMT", "version": "v3" } ]
2018-11-06
[ [ "Gonzalez-Garcia", "Abel", "" ], [ "van de Weijer", "Joost", "" ], [ "Bengio", "Yoshua", "" ] ]
Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information for both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains, perform domain-specific image transfer and interpolation, and carry out cross-domain retrieval without the need for labeled data, using only paired images. We compare our model to the state-of-the-art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.
1904.05449
Mengyu Dai
Mengyu Dai, Zhengwu Zhang, and Anuj Srivastava
Analyzing Dynamical Brain Functional Connectivity As Trajectories on Space of Covariance Matrices
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
Published in IEEE Transactions on Medical Imaging, 2019
10.1109/TMI.2019.2931708
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human brain functional connectivity (FC) is often measured as the similarity of functional MRI responses across brain regions when a brain is either resting or performing a task. This paper aims to statistically analyze the dynamic nature of FC by representing the collective time-series data, over a set of brain regions, as a trajectory on the space of covariance matrices, or symmetric-positive definite matrices (SPDMs). We use a recently developed metric on the space of SPDMs for quantifying differences across FC observations, and for clustering and classification of FC trajectories. To facilitate large scale and high-dimensional data analysis, we propose a novel, metric-based dimensionality reduction technique to reduce data from large SPDMs to small SPDMs. We illustrate this comprehensive framework using data from the Human Connectome Project (HCP) database for multiple subjects and tasks, with task classification rates that match or outperform state-of-the-art techniques.
[ { "created": "Wed, 10 Apr 2019 21:27:42 GMT", "version": "v1" }, { "created": "Wed, 15 May 2019 18:49:33 GMT", "version": "v2" } ]
2020-03-05
[ [ "Dai", "Mengyu", "" ], [ "Zhang", "Zhengwu", "" ], [ "Srivastava", "Anuj", "" ] ]
Human brain functional connectivity (FC) is often measured as the similarity of functional MRI responses across brain regions when a brain is either resting or performing a task. This paper aims to statistically analyze the dynamic nature of FC by representing the collective time-series data, over a set of brain regions, as a trajectory on the space of covariance matrices, or symmetric-positive definite matrices (SPDMs). We use a recently developed metric on the space of SPDMs for quantifying differences across FC observations, and for clustering and classification of FC trajectories. To facilitate large scale and high-dimensional data analysis, we propose a novel, metric-based dimensionality reduction technique to reduce data from large SPDMs to small SPDMs. We illustrate this comprehensive framework using data from the Human Connectome Project (HCP) database for multiple subjects and tasks, with task classification rates that match or outperform state-of-the-art techniques.
2306.06253
Siyan Zhao
Siyan Zhao and Aditya Grover
Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models
published at NeurIPS 2023, project page: https://siyan-zhao.github.io/decision-stacks/
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Reinforcement learning presents an attractive paradigm to reason about several distinct aspects of sequential decision making, such as specifying complex goals, planning future observations and actions, and critiquing their utilities. However, the combined integration of these capabilities poses competing algorithmic challenges in retaining maximal expressivity while allowing for flexibility in modeling choices for efficient learning and inference. We present Decision Stacks, a generative framework that decomposes goal-conditioned policy agents into 3 generative modules. These modules simulate the temporal evolution of observations, rewards, and actions via independent generative models that can be learned in parallel via teacher forcing. Our framework guarantees both expressivity and flexibility in designing individual modules to account for key factors such as architectural bias, optimization objective and dynamics, transferability across domains, and inference speed. Our empirical results demonstrate the effectiveness of Decision Stacks for offline policy optimization for several MDP and POMDP environments, outperforming existing methods and enabling flexible generative decision making.
[ { "created": "Fri, 9 Jun 2023 20:52:16 GMT", "version": "v1" }, { "created": "Sun, 29 Oct 2023 21:48:34 GMT", "version": "v2" } ]
2023-10-31
[ [ "Zhao", "Siyan", "" ], [ "Grover", "Aditya", "" ] ]
Reinforcement learning presents an attractive paradigm to reason about several distinct aspects of sequential decision making, such as specifying complex goals, planning future observations and actions, and critiquing their utilities. However, the combined integration of these capabilities poses competing algorithmic challenges in retaining maximal expressivity while allowing for flexibility in modeling choices for efficient learning and inference. We present Decision Stacks, a generative framework that decomposes goal-conditioned policy agents into 3 generative modules. These modules simulate the temporal evolution of observations, rewards, and actions via independent generative models that can be learned in parallel via teacher forcing. Our framework guarantees both expressivity and flexibility in designing individual modules to account for key factors such as architectural bias, optimization objective and dynamics, transferability across domains, and inference speed. Our empirical results demonstrate the effectiveness of Decision Stacks for offline policy optimization for several MDP and POMDP environments, outperforming existing methods and enabling flexible generative decision making.
1706.06261
Huayi Duan
Huayi Duan, Cong Wang, Xingliang Yuan, Yajin Zhou, Qian Wang, Kui Ren
LightBox: Full-stack Protected Stateful Middlebox at Lightning Speed
Accepted at ACM CCS 2019
null
10.1145/3319535.3339814
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Running off-site software middleboxes at third-party service providers has been a popular practice. However, routing large volumes of raw traffic, which may carry sensitive information, to a remote site for processing raises severe security concerns. Prior solutions often abstract away important factors pertinent to real-world deployment. In particular, they overlook the significance of metadata protection and stateful processing. Unprotected traffic metadata like low-level headers, size and count, can be exploited to learn supposedly encrypted application contents. Meanwhile, tracking the states of 100,000s of flows concurrently is often indispensable in production-level middleboxes deployed at real networks. We present LightBox, the first system that can drive off-site middleboxes at near-native speed with stateful processing and the most comprehensive protection to date. Built upon commodity trusted hardware, Intel SGX, LightBox is the product of our systematic investigation of how to overcome the inherent limitations of secure enclaves using domain knowledge and customization. First, we introduce an elegant virtual network interface that allows convenient access to fully protected packets at line rate without leaving the enclave, as if from the trusted source network. Second, we provide complete flow state management for efficient stateful processing, by tailoring a set of data structures and algorithms optimized for the highly constrained enclave space. Extensive evaluations demonstrate that LightBox, with all security benefits, can achieve 10Gbps packet I/O, and that with case studies on three stateful middleboxes, it can operate at near-native speed.
[ { "created": "Tue, 20 Jun 2017 04:07:54 GMT", "version": "v1" }, { "created": "Sat, 4 Aug 2018 03:26:45 GMT", "version": "v2" }, { "created": "Wed, 16 Oct 2019 02:54:46 GMT", "version": "v3" } ]
2019-10-17
[ [ "Duan", "Huayi", "" ], [ "Wang", "Cong", "" ], [ "Yuan", "Xingliang", "" ], [ "Zhou", "Yajin", "" ], [ "Wang", "Qian", "" ], [ "Ren", "Kui", "" ] ]
Running off-site software middleboxes at third-party service providers has been a popular practice. However, routing large volumes of raw traffic, which may carry sensitive information, to a remote site for processing raises severe security concerns. Prior solutions often abstract away important factors pertinent to real-world deployment. In particular, they overlook the significance of metadata protection and stateful processing. Unprotected traffic metadata like low-level headers, size and count, can be exploited to learn supposedly encrypted application contents. Meanwhile, tracking the states of 100,000s of flows concurrently is often indispensable in production-level middleboxes deployed at real networks. We present LightBox, the first system that can drive off-site middleboxes at near-native speed with stateful processing and the most comprehensive protection to date. Built upon commodity trusted hardware, Intel SGX, LightBox is the product of our systematic investigation of how to overcome the inherent limitations of secure enclaves using domain knowledge and customization. First, we introduce an elegant virtual network interface that allows convenient access to fully protected packets at line rate without leaving the enclave, as if from the trusted source network. Second, we provide complete flow state management for efficient stateful processing, by tailoring a set of data structures and algorithms optimized for the highly constrained enclave space. Extensive evaluations demonstrate that LightBox, with all security benefits, can achieve 10Gbps packet I/O, and that with case studies on three stateful middleboxes, it can operate at near-native speed.
1009.2775
Grenville Croll
Ben G. Rittweger, Eoin Langan
Spreadsheet Risk Management in Organisations
12 Pages, 1 Table, 6 Figures
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2010 61-72 ISBN 978-1-905404-50-6
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper examines, in the context of financial reporting, the controls that organisations have in place to manage spreadsheet risk and errors. There has been widespread research conducted in this area, both in Ireland and internationally. This paper describes a study involving 19 participants (2 case studies and 17 by survey) from Ireland. Three areas are examined: firstly, the extent of spreadsheet usage; secondly, the level of complexity employed in spreadsheets; and finally, the controls in place regarding spreadsheets. The findings support previous findings of Panko (1998) that errors occur frequently in spreadsheets and that controls are few or unenforced; however, this research finds that attitudes are changing with regard to spreadsheet risk and that one organisation is implementing a comprehensive project regarding policies on the development and control of spreadsheets. Further research could be undertaken in the future to examine the development of a "best practice model", both for the reduction of errors and to minimise the risk in spreadsheet usage.
[ { "created": "Tue, 14 Sep 2010 20:32:29 GMT", "version": "v1" } ]
2010-09-16
[ [ "Rittweger", "Ben G.", "" ], [ "Langan", "Eoin", "" ] ]
The paper examines, in the context of financial reporting, the controls that organisations have in place to manage spreadsheet risk and errors. There has been widespread research conducted in this area, both in Ireland and internationally. This paper describes a study involving 19 participants (2 case studies and 17 by survey) from Ireland. Three areas are examined: firstly, the extent of spreadsheet usage; secondly, the level of complexity employed in spreadsheets; and finally, the controls in place regarding spreadsheets. The findings support previous findings of Panko (1998) that errors occur frequently in spreadsheets and that controls are few or unenforced; however, this research finds that attitudes are changing with regard to spreadsheet risk and that one organisation is implementing a comprehensive project regarding policies on the development and control of spreadsheets. Further research could be undertaken in the future to examine the development of a "best practice model", both for the reduction of errors and to minimise the risk in spreadsheet usage.
2306.03306
Aditya Acharya
Aditya Acharya, David Mount
Tracking Evolving labels using Cone based Oracles
This is an abstract of a presentation given at CG:YRF 2023. It has been made public for the benefit of the community and should be considered a preprint rather than a formally reviewed paper. Thus, this work is expected to appear in a conference with formal proceedings and/or in a journal
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
The evolving data framework was first proposed by Anagnostopoulos et al., where an evolver makes small changes to a structure behind the scenes. Instead of taking a single input and producing a single output, an algorithm judiciously probes the current state of the structure and attempts to continuously maintain a sketch of the structure that is as close as possible to its actual state. A number of problems have been studied in the evolving framework, including our own work on labeled trees. We were motivated by the problem of maintaining a labeling in the plane, where updating the labels requires physically moving them. Applications involve tracking evolving disease hot-spots via mobile testing units, and tracking unmanned aerial vehicles. To be specific, we consider the problem of tracking labeled nodes in the plane, where an evolver continuously swaps labels of any two nearby nodes in the background, unknown to us. We are tasked with maintaining a hypothesis, an approximate sketch of the locations of these labels, which we can only update by physically moving them over a sparse graph. We assume the existence of an Oracle, which, when suitably probed, guides us in fixing our hypothesis.
[ { "created": "Mon, 5 Jun 2023 23:27:36 GMT", "version": "v1" } ]
2023-06-07
[ [ "Acharya", "Aditya", "" ], [ "Mount", "David", "" ] ]
The evolving data framework was first proposed by Anagnostopoulos et al., where an evolver makes small changes to a structure behind the scenes. Instead of taking a single input and producing a single output, an algorithm judiciously probes the current state of the structure and attempts to continuously maintain a sketch of the structure that is as close as possible to its actual state. A number of problems have been studied in the evolving framework, including our own work on labeled trees. We were motivated by the problem of maintaining a labeling in the plane, where updating the labels requires physically moving them. Applications involve tracking evolving disease hot-spots via mobile testing units, and tracking unmanned aerial vehicles. To be specific, we consider the problem of tracking labeled nodes in the plane, where an evolver continuously swaps labels of any two nearby nodes in the background, unknown to us. We are tasked with maintaining a hypothesis, an approximate sketch of the locations of these labels, which we can only update by physically moving them over a sparse graph. We assume the existence of an Oracle, which, when suitably probed, guides us in fixing our hypothesis.
2010.13424
Tae-Young Chung
Tae-young Chung, Heansung Lee, Myeong Ah Cho, Suhwan Cho, Sangyoun Lee
Multi-object tracking with self-supervised associating network
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-Object Tracking (MOT) is a task with great potential for development, and there are still many problems to be solved. In the traditional tracking-by-detection paradigm, there has been a lot of work on feature-based object re-identification methods. However, these methods suffer from a lack of training data. To label a multi-object tracking dataset, every detection in a video sequence needs its location and ID. Since assigning consecutive IDs to each detection in every sequence is a very labor-intensive task, current multi-object tracking datasets are not sufficient to train a re-identification network. In this paper, we therefore propose a novel self-supervised learning method that uses a large number of short videos with no human labeling, and we improve tracking performance through a re-identification network trained in this self-supervised manner, solving the lack-of-training-data problem. Although the re-identification network is trained in a self-supervised manner, it achieves state-of-the-art performance of MOTA 62.0\% and IDF1 62.6\% on the MOT17 test benchmark. Furthermore, the performance improves with the amount of training data, which shows the potential of the self-supervised method.
[ { "created": "Mon, 26 Oct 2020 08:48:23 GMT", "version": "v1" } ]
2020-10-27
[ [ "Chung", "Tae-young", "" ], [ "Lee", "Heansung", "" ], [ "Cho", "Myeong Ah", "" ], [ "Cho", "Suhwan", "" ], [ "Lee", "Sangyoun", "" ] ]
Multi-Object Tracking (MOT) is a task with great potential for development, and there are still many problems to be solved. In the traditional tracking-by-detection paradigm, there has been a lot of work on feature-based object re-identification methods. However, these methods suffer from a lack of training data. To label a multi-object tracking dataset, every detection in a video sequence needs its location and ID. Since assigning consecutive IDs to each detection in every sequence is a very labor-intensive task, current multi-object tracking datasets are not sufficient to train a re-identification network. In this paper, we therefore propose a novel self-supervised learning method that uses a large number of short videos with no human labeling, and we improve tracking performance through a re-identification network trained in this self-supervised manner, solving the lack-of-training-data problem. Although the re-identification network is trained in a self-supervised manner, it achieves state-of-the-art performance of MOTA 62.0\% and IDF1 62.6\% on the MOT17 test benchmark. Furthermore, the performance improves with the amount of training data, which shows the potential of the self-supervised method.
2311.13656
Yuzhe You
Yuzhe You, Jarvis Tse, and Jian Zhao
Panda or not Panda? Understanding Adversarial Attacks with Interactive Visualization
null
null
null
null
cs.HC cs.CV
http://creativecommons.org/licenses/by/4.0/
Adversarial machine learning (AML) studies attacks that can fool machine learning algorithms into generating incorrect outcomes as well as the defenses against worst-case attacks to strengthen model robustness. Specifically for image classification, it is challenging to understand adversarial attacks due to their use of subtle perturbations that are not human-interpretable, as well as the variability of attack impacts influenced by diverse methodologies, instance differences, and model architectures. Through a design study with AML learners and teachers, we introduce AdvEx, a multi-level interactive visualization system that comprehensively presents the properties and impacts of evasion attacks on different image classifiers for novice AML learners. We quantitatively and qualitatively assessed AdvEx in a two-part evaluation including user studies and expert interviews. Our results show that AdvEx is not only highly effective as a visualization tool for understanding AML mechanisms, but also provides an engaging and enjoyable learning experience, thus demonstrating its overall benefits for AML learners.
[ { "created": "Wed, 22 Nov 2023 19:14:25 GMT", "version": "v1" } ]
2023-11-27
[ [ "You", "Yuzhe", "" ], [ "Tse", "Jarvis", "" ], [ "Zhao", "Jian", "" ] ]
Adversarial machine learning (AML) studies attacks that can fool machine learning algorithms into generating incorrect outcomes as well as the defenses against worst-case attacks to strengthen model robustness. Specifically for image classification, it is challenging to understand adversarial attacks due to their use of subtle perturbations that are not human-interpretable, as well as the variability of attack impacts influenced by diverse methodologies, instance differences, and model architectures. Through a design study with AML learners and teachers, we introduce AdvEx, a multi-level interactive visualization system that comprehensively presents the properties and impacts of evasion attacks on different image classifiers for novice AML learners. We quantitatively and qualitatively assessed AdvEx in a two-part evaluation including user studies and expert interviews. Our results show that AdvEx is not only highly effective as a visualization tool for understanding AML mechanisms, but also provides an engaging and enjoyable learning experience, thus demonstrating its overall benefits for AML learners.
1601.06497
Da Yan
Da Yan, James Cheng, M. Tamer \"Ozsu, Fan Yang, Yi Lu, John C.S. Lui, Qizhen Zhang, Wilfred Ng
Quegel: A General-Purpose Query-Centric Framework for Querying Big Graphs
This is a full version of our VLDB paper
null
null
null
cs.DC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pioneered by Google's Pregel, many distributed systems have been developed for large-scale graph analytics. These systems expose the user-friendly "think like a vertex" programming interface to users, and exhibit good horizontal scalability. However, these systems are designed for tasks where the majority of graph vertices participate in computation, but are not suitable for processing light-workload graph queries where only a small fraction of vertices need to be accessed. The programming paradigm adopted by these systems can seriously under-utilize the resources in a cluster for graph query processing. In this work, we develop a new open-source system, called Quegel, for querying big graphs, which treats queries as first-class citizens in the design of its computing model. Users only need to specify the Pregel-like algorithm for a generic query, and Quegel processes light-workload graph queries on demand using a novel superstep-sharing execution model to effectively utilize the cluster resources. Quegel further provides a convenient interface for constructing graph indexes, which significantly improve query performance but are not supported by existing graph-parallel systems. Our experiments verified that Quegel is highly efficient in answering various types of graph queries and is up to orders of magnitude faster than existing systems.
[ { "created": "Mon, 25 Jan 2016 07:27:53 GMT", "version": "v1" } ]
2016-01-26
[ [ "Yan", "Da", "" ], [ "Cheng", "James", "" ], [ "Özsu", "M. Tamer", "" ], [ "Yang", "Fan", "" ], [ "Lu", "Yi", "" ], [ "Lui", "John C. S.", "" ], [ "Zhang", "Qizhen", "" ], [ "Ng", "Wilfred", "" ] ]
Pioneered by Google's Pregel, many distributed systems have been developed for large-scale graph analytics. These systems expose the user-friendly "think like a vertex" programming interface to users, and exhibit good horizontal scalability. However, these systems are designed for tasks where the majority of graph vertices participate in computation, but are not suitable for processing light-workload graph queries where only a small fraction of vertices need to be accessed. The programming paradigm adopted by these systems can seriously under-utilize the resources in a cluster for graph query processing. In this work, we develop a new open-source system, called Quegel, for querying big graphs, which treats queries as first-class citizens in the design of its computing model. Users only need to specify the Pregel-like algorithm for a generic query, and Quegel processes light-workload graph queries on demand using a novel superstep-sharing execution model to effectively utilize the cluster resources. Quegel further provides a convenient interface for constructing graph indexes, which significantly improve query performance but are not supported by existing graph-parallel systems. Our experiments verified that Quegel is highly efficient in answering various types of graph queries and is up to orders of magnitude faster than existing systems.
2010.11685
Zilong Wang
Zilong Wang, Mingjie Zhan, Xuebo Liu, Ding Liang
DocStruct: A Multimodal Method to Extract Hierarchy Structure in Document for General Form Understanding
Accepted to EMNLP 2020 Findings
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Form understanding depends on both textual contents and organizational structure. Although modern OCR performs well, it is still challenging to realize general form understanding because forms are commonly used and come in various formats. The table detection and handcrafted features in previous works cannot be applied to all forms because of their format requirements. Therefore, we concentrate on the most elementary components, the key-value pairs, and adopt multimodal methods to extract features. We consider the form structure as a tree-like or graph-like hierarchy of text fragments. The parent-child relation corresponds to the key-value pairs in forms. We utilize the state-of-the-art models and design targeted extraction modules to extract multimodal features from semantic contents, layout information, and visual images. A hybrid fusion method of concatenation and feature shifting is designed to fuse the heterogeneous features and provide an informative joint representation. We adopt an asymmetric algorithm and negative sampling in our model as well. We validate our method on two benchmarks, MedForm and FUNSD, and extensive experiments demonstrate the effectiveness of our method.
[ { "created": "Thu, 15 Oct 2020 08:54:17 GMT", "version": "v1" } ]
2020-10-23
[ [ "Wang", "Zilong", "" ], [ "Zhan", "Mingjie", "" ], [ "Liu", "Xuebo", "" ], [ "Liang", "Ding", "" ] ]
Form understanding depends on both textual contents and organizational structure. Although modern OCR performs well, it is still challenging to realize general form understanding because forms are commonly used and come in various formats. The table detection and handcrafted features in previous works cannot be applied to all forms because of their format requirements. Therefore, we concentrate on the most elementary components, the key-value pairs, and adopt multimodal methods to extract features. We consider the form structure as a tree-like or graph-like hierarchy of text fragments. The parent-child relation corresponds to the key-value pairs in forms. We utilize the state-of-the-art models and design targeted extraction modules to extract multimodal features from semantic contents, layout information, and visual images. A hybrid fusion method of concatenation and feature shifting is designed to fuse the heterogeneous features and provide an informative joint representation. We adopt an asymmetric algorithm and negative sampling in our model as well. We validate our method on two benchmarks, MedForm and FUNSD, and extensive experiments demonstrate the effectiveness of our method.
2302.14261
Xueming Yan
Xueming Yan, Zhihang Fang, Yaochu Jin
Augmented Transformers with Adaptive n-grams Embedding for Multilingual Scene Text Recognition
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While vision transformers have been highly successful in improving the performance in image-based tasks, not much work has been reported on applying transformers to multilingual scene text recognition due to the complexities in the visual appearance of multilingual texts. To fill the gap, this paper proposes an augmented transformer architecture with n-grams embedding and cross-language rectification (TANGER). TANGER consists of a primary transformer with single patch embeddings of visual images, and a supplementary transformer with adaptive n-grams embeddings that aims to flexibly explore the potential correlations between neighbouring visual patches, which is essential for feature extraction from multilingual scene texts. Cross-language rectification is achieved with a loss function that takes into account both language identification and contextual coherence scoring. Extensive comparative studies are conducted on four widely used benchmark datasets as well as a new multilingual scene text dataset containing Indonesian, English, and Chinese collected from tourism scenes in Indonesia. Our experimental results demonstrate that TANGER is considerably better compared to the state-of-the-art, especially in handling complex multilingual scene texts.
[ { "created": "Tue, 28 Feb 2023 02:37:30 GMT", "version": "v1" } ]
2023-03-01
[ [ "Yan", "Xueming", "" ], [ "Fang", "Zhihang", "" ], [ "Jin", "Yaochu", "" ] ]
While vision transformers have been highly successful in improving the performance in image-based tasks, not much work has been reported on applying transformers to multilingual scene text recognition due to the complexities in the visual appearance of multilingual texts. To fill the gap, this paper proposes an augmented transformer architecture with n-grams embedding and cross-language rectification (TANGER). TANGER consists of a primary transformer with single patch embeddings of visual images, and a supplementary transformer with adaptive n-grams embeddings that aims to flexibly explore the potential correlations between neighbouring visual patches, which is essential for feature extraction from multilingual scene texts. Cross-language rectification is achieved with a loss function that takes into account both language identification and contextual coherence scoring. Extensive comparative studies are conducted on four widely used benchmark datasets as well as a new multilingual scene text dataset containing Indonesian, English, and Chinese collected from tourism scenes in Indonesia. Our experimental results demonstrate that TANGER is considerably better compared to the state-of-the-art, especially in handling complex multilingual scene texts.
2305.02697
Sabri Pllana
Julian Kunkel, Christian Boehme, Jonathan Decker, Fabrizio Magugliani, Dirk Pleiter, Bastian Koller, Karthee Sivalingam, Sabri Pllana, Alexander Nikolov, Mujdat Soyturk, Christian Racca, Andrea Bartolini, Adrian Tate, Berkay Yaman
DECICE: Device-Edge-Cloud Intelligent Collaboration Framework
null
null
null
null
cs.DC cs.AI
http://creativecommons.org/licenses/by/4.0/
DECICE is a Horizon Europe project that is developing an AI-enabled, open, and portable management framework for the automatic and adaptive optimization and deployment of applications in the computing continuum, encompassing everything from IoT sensors on the Edge to large-scale Cloud/HPC computing infrastructures. In this paper, we describe the DECICE framework and architecture. Furthermore, we highlight use cases for framework evaluation: intelligent traffic intersection, magnetic resonance imaging, and emergency response.
[ { "created": "Thu, 4 May 2023 10:11:14 GMT", "version": "v1" } ]
2023-05-05
[ [ "Kunkel", "Julian", "" ], [ "Boehme", "Christian", "" ], [ "Decker", "Jonathan", "" ], [ "Magugliani", "Fabrizio", "" ], [ "Pleiter", "Dirk", "" ], [ "Koller", "Bastian", "" ], [ "Sivalingam", "Karthee", "" ], [ "Pllana", "Sabri", "" ], [ "Nikolov", "Alexander", "" ], [ "Soyturk", "Mujdat", "" ], [ "Racca", "Christian", "" ], [ "Bartolini", "Andrea", "" ], [ "Tate", "Adrian", "" ], [ "Yaman", "Berkay", "" ] ]
DECICE is a Horizon Europe project that is developing an AI-enabled, open, and portable management framework for the automatic and adaptive optimization and deployment of applications in the computing continuum, encompassing everything from IoT sensors on the Edge to large-scale Cloud/HPC computing infrastructures. In this paper, we describe the DECICE framework and architecture. Furthermore, we highlight use cases for framework evaluation: intelligent traffic intersection, magnetic resonance imaging, and emergency response.
2301.06799
Md Sadik Awal
Md Sadik Awal, Christopher Thompson, Md Tauhidur Rahman
Utilization of Impedance Disparity Incurred from Switching Activities to Monitor and Characterize Firmware Activities
null
null
null
null
cs.CR eess.SP
http://creativecommons.org/licenses/by/4.0/
The massive trend toward embedded systems introduces new security threats that must be prevented. Malicious firmware makes it easier to launch cyberattacks against embedded systems. Systems infected with malicious firmware maintain the appearance of normal firmware operation but execute undesirable activities, which is usually a security risk. Traditionally, cybercriminals use malicious firmware to develop possible backdoors for future attacks. Due to the restricted resources of embedded systems, it is difficult to thwart these attacks using the majority of contemporary standard security protocols. In addition, monitoring firmware operations using existing side channels from outside the processing unit, such as electromagnetic radiation, necessitates a complicated hardware configuration and in-depth technical understanding. In this paper, we propose a physical side channel that is formed by detecting the overall impedance changes induced by the firmware activities of a central processing unit. To demonstrate how this side channel can be exploited for detecting firmware activities, we experimentally validate it using impedance measurements to distinguish between distinct firmware operations with an accuracy of greater than 90%. These findings are the product of classifiers that are trained via machine learning. The implementation of our proposed methodology also leaves room for the use of hardware authentication.
[ { "created": "Tue, 17 Jan 2023 10:52:19 GMT", "version": "v1" } ]
2023-01-18
[ [ "Awal", "Md Sadik", "" ], [ "Thompson", "Christopher", "" ], [ "Rahman", "Md Tauhidur", "" ] ]
The massive trend toward embedded systems introduces new security threats that must be prevented. Malicious firmware makes it easier to launch cyberattacks against embedded systems. Systems infected with malicious firmware maintain the appearance of normal firmware operation but execute undesirable activities, which is usually a security risk. Traditionally, cybercriminals use malicious firmware to develop possible backdoors for future attacks. Due to the restricted resources of embedded systems, it is difficult to thwart these attacks using the majority of contemporary standard security protocols. In addition, monitoring firmware operations using existing side channels from outside the processing unit, such as electromagnetic radiation, necessitates a complicated hardware configuration and in-depth technical understanding. In this paper, we propose a physical side channel that is formed by detecting the overall impedance changes induced by the firmware activities of a central processing unit. To demonstrate how this side channel can be exploited for detecting firmware activities, we experimentally validate it using impedance measurements to distinguish between distinct firmware operations with an accuracy of greater than 90%. These findings are the product of classifiers that are trained via machine learning. The implementation of our proposed methodology also leaves room for the use of hardware authentication.
2406.02822
Andre Schreiber
Andre Schreiber, Arun N. Sivakumar, Peter Du, Mateus V. Gasparino, Girish Chowdhary, Katherine Driggs-Campbell
W-RIZZ: A Weakly-Supervised Framework for Relative Traversability Estimation in Mobile Robotics
Accepted by RA-L. Code is available at https://github.com/andreschreiber/W-RIZZ
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Successful deployment of mobile robots in unstructured domains requires an understanding of the environment and terrain to avoid hazardous areas, getting stuck, and colliding with obstacles. Traversability estimation--which predicts where in the environment a robot can travel--is one prominent approach that tackles this problem. Existing geometric methods may ignore important semantic considerations, while semantic segmentation approaches involve a tedious labeling process. Recent self-supervised methods reduce labeling tedium, but require additional data or models and tend to struggle to explicitly label untraversable areas. To address these limitations, we introduce a weakly-supervised method for relative traversability estimation. Our method involves manually annotating the relative traversability of a small number of point pairs, which significantly reduces labeling effort compared to traditional segmentation-based methods and avoids the limitations of self-supervised methods. We further improve the performance of our method through a novel cross-image labeling strategy and loss function. We demonstrate the viability and performance of our method through deployment on a mobile robot in outdoor environments.
[ { "created": "Tue, 4 Jun 2024 23:46:15 GMT", "version": "v1" } ]
2024-06-06
[ [ "Schreiber", "Andre", "" ], [ "Sivakumar", "Arun N.", "" ], [ "Du", "Peter", "" ], [ "Gasparino", "Mateus V.", "" ], [ "Chowdhary", "Girish", "" ], [ "Driggs-Campbell", "Katherine", "" ] ]
Successful deployment of mobile robots in unstructured domains requires an understanding of the environment and terrain to avoid hazardous areas, getting stuck, and colliding with obstacles. Traversability estimation--which predicts where in the environment a robot can travel--is one prominent approach that tackles this problem. Existing geometric methods may ignore important semantic considerations, while semantic segmentation approaches involve a tedious labeling process. Recent self-supervised methods reduce labeling tedium, but require additional data or models and tend to struggle to explicitly label untraversable areas. To address these limitations, we introduce a weakly-supervised method for relative traversability estimation. Our method involves manually annotating the relative traversability of a small number of point pairs, which significantly reduces labeling effort compared to traditional segmentation-based methods and avoids the limitations of self-supervised methods. We further improve the performance of our method through a novel cross-image labeling strategy and loss function. We demonstrate the viability and performance of our method through deployment on a mobile robot in outdoor environments.
2002.11635
Manuel Kaspar
Manuel Kaspar, Juan David Munoz Osorio, J\"urgen Bock
Sim2Real Transfer for Reinforcement Learning without Dynamics Randomization
null
null
null
null
cs.AI cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we show how to use the Operational Space Control framework (OSC) under joint and Cartesian constraints for reinforcement learning in Cartesian space. Our method is therefore able to learn fast and with adjustable degrees of freedom, while we are able to transfer policies without additional dynamics randomization on a KUKA LBR iiwa peg-in-hole task. Before learning in simulation starts, we perform a system identification to align the simulation environment as closely as possible with the dynamics of a real robot. Adding constraints to the OSC controller allows us to learn in a safe way on the real robot or to learn a flexible, goal-conditioned policy that can be easily transferred from simulation to the real robot.
[ { "created": "Wed, 19 Feb 2020 11:10:21 GMT", "version": "v1" } ]
2020-02-27
[ [ "Kaspar", "Manuel", "" ], [ "Osorio", "Juan David Munoz", "" ], [ "Bock", "Jürgen", "" ] ]
In this work we show how to use the Operational Space Control framework (OSC) under joint and Cartesian constraints for reinforcement learning in Cartesian space. Our method is therefore able to learn fast and with adjustable degrees of freedom, while we are able to transfer policies without additional dynamics randomization on a KUKA LBR iiwa peg-in-hole task. Before learning in simulation starts, we perform a system identification to align the simulation environment as closely as possible with the dynamics of a real robot. Adding constraints to the OSC controller allows us to learn in a safe way on the real robot or to learn a flexible, goal-conditioned policy that can be easily transferred from simulation to the real robot.
2201.06248
Hossein Sadr
Fatemeh Mohades Deilami, Hossein Sadr, Mojdeh Nazari
Using Machine Learning Based Models for Personality Recognition
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Personality can be defined as the combination of behavior, emotion, motivation, and thoughts that aims at describing various aspects of human behavior based on a few stable and measurable characteristics. Considering the fact that our personality has a remarkable influence on our daily life, automatic recognition of a person's personality attributes can provide many essential practical applications in various aspects of cognitive science. A deep learning based method for the task of personality recognition from text is proposed in this paper. Among various deep neural networks, Convolutional Neural Networks (CNN) have demonstrated profound efficiency in natural language processing and especially personality detection. Owing to the fact that various filter sizes in CNN may influence its performance, we decided to combine CNN with AdaBoost, a classical ensemble algorithm, to consider the possibility of using the contribution of various filter lengths and grasp their potential in the final classification via combining various classifiers with respective filter sizes using AdaBoost. Our proposed method was validated on the Essay dataset by conducting a series of experiments, and the empirical results demonstrated the superiority of our proposed method compared to both machine learning and deep learning methods for the task of personality recognition.
[ { "created": "Mon, 17 Jan 2022 07:20:51 GMT", "version": "v1" } ]
2022-01-19
[ [ "Deilami", "Fatemeh Mohades", "" ], [ "Sadr", "Hossein", "" ], [ "Nazari", "Mojdeh", "" ] ]
Personality can be defined as the combination of behavior, emotion, motivation, and thoughts that aims at describing various aspects of human behavior based on a few stable and measurable characteristics. Considering the fact that our personality has a remarkable influence on our daily life, automatic recognition of a person's personality attributes can provide many essential practical applications in various aspects of cognitive science. A deep learning based method for the task of personality recognition from text is proposed in this paper. Among various deep neural networks, Convolutional Neural Networks (CNN) have demonstrated profound efficiency in natural language processing and especially personality detection. Owing to the fact that various filter sizes in CNN may influence its performance, we decided to combine CNN with AdaBoost, a classical ensemble algorithm, to consider the possibility of using the contribution of various filter lengths and grasp their potential in the final classification via combining various classifiers with respective filter sizes using AdaBoost. Our proposed method was validated on the Essay dataset by conducting a series of experiments, and the empirical results demonstrated the superiority of our proposed method compared to both machine learning and deep learning methods for the task of personality recognition.
2402.00089
Dr Peter J. Bentley
Soo Ling Lim, Peter J Bentley, Fuyuki Ishikawa
SCAPE: Searching Conceptual Architecture Prompts using Evolution
8 pages
IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence 2024), Yokohama, Japan
null
null
cs.NE cs.AI
http://creativecommons.org/licenses/by/4.0/
Conceptual architecture involves a highly creative exploration of novel ideas, often taken from other disciplines as architects consider radical new forms, materials, textures and colors for buildings. While today's generative AI systems can produce remarkable results, they lack the creativity demonstrated for decades by evolutionary algorithms. SCAPE, our proposed tool, combines evolutionary search with generative AI, enabling users to explore creative, good-quality designs inspired by their initial input through a simple point-and-click interface. SCAPE injects randomness into generative AI, and enables memory, making use of the built-in language skills of GPT-4 to vary prompts via text-based mutation and crossover. We demonstrate that compared to DALL-E 3, SCAPE enables a 67% improvement in image novelty, plus improvements in quality and effectiveness of use; we show that in just three iterations SCAPE achieves a 24% increase in image novelty, enabling effective exploration and optimization of images by users. We use more than 20 independent architects to assess SCAPE, who provide markedly positive feedback.
[ { "created": "Wed, 31 Jan 2024 10:25:45 GMT", "version": "v1" }, { "created": "Tue, 2 Apr 2024 10:05:33 GMT", "version": "v2" } ]
2024-04-03
[ [ "Lim", "Soo Ling", "" ], [ "Bentley", "Peter J", "" ], [ "Ishikawa", "Fuyuki", "" ] ]
Conceptual architecture involves a highly creative exploration of novel ideas, often taken from other disciplines as architects consider radical new forms, materials, textures and colors for buildings. While today's generative AI systems can produce remarkable results, they lack the creativity demonstrated for decades by evolutionary algorithms. SCAPE, our proposed tool, combines evolutionary search with generative AI, enabling users to explore creative, good-quality designs inspired by their initial input through a simple point-and-click interface. SCAPE injects randomness into generative AI, and enables memory, making use of the built-in language skills of GPT-4 to vary prompts via text-based mutation and crossover. We demonstrate that compared to DALL-E 3, SCAPE enables a 67% improvement in image novelty, plus improvements in quality and effectiveness of use; we show that in just three iterations SCAPE achieves a 24% increase in image novelty, enabling effective exploration and optimization of images by users. We use more than 20 independent architects to assess SCAPE, who provide markedly positive feedback.
2209.14217
Xin Yu
Xin Yu, Yucheng Tang, Qi Yang, Ho Hin Lee, Riqiang Gao, Shunxing Bao, Ann Zenobia Moore, Luigi Ferrucci, Bennett A. Landman
Longitudinal Variability Analysis on Low-dose Abdominal CT with Deep Learning-based Segmentation
7 pages, 3 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metabolic health is increasingly implicated as a risk factor across conditions from cardiology to neurology, and efficient assessment of body composition is critical to quantitatively characterizing these relationships. 2D low-dose single-slice computed tomography (CT) provides a high-resolution, quantitative tissue map, albeit with a limited field of view. Although numerous potential analyses have been proposed for quantifying image context, there has been no comprehensive study of low-dose single-slice CT longitudinal variability with automated segmentation. We studied a total of 1816 slices from 1469 subjects of the Baltimore Longitudinal Study on Aging (BLSA) abdominal dataset using a supervised deep learning-based segmentation and an unsupervised clustering method. 300 of the 1469 subjects with a two-year gap between their first two scans were picked out to evaluate longitudinal variability, with measurements including the intraclass correlation coefficient (ICC) and coefficient of variation (CV) in terms of tissue/organ size and mean intensity. We showed that our segmentation methods are stable in longitudinal settings, with Dice scores ranging from 0.821 to 0.962 for thirteen target abdominal tissue structures. We observed high variability in most organs with ICC<0.5, and low variability in the areas of muscle, abdominal wall, fat, and body mask with average ICC>0.8. We found that the variability in organs is highly related to the cross-sectional position of the 2D slice. Our efforts pave the way for quantitative exploration and quality control to reduce uncertainties in longitudinal analysis.
[ { "created": "Wed, 28 Sep 2022 16:43:29 GMT", "version": "v1" } ]
2022-09-29
[ [ "Yu", "Xin", "" ], [ "Tang", "Yucheng", "" ], [ "Yang", "Qi", "" ], [ "Lee", "Ho Hin", "" ], [ "Gao", "Riqiang", "" ], [ "Bao", "Shunxing", "" ], [ "Moore", "Ann Zenobia", "" ], [ "Ferrucci", "Luigi", "" ], [ "Landman", "Bennett A.", "" ] ]
Metabolic health is increasingly implicated as a risk factor across conditions from cardiology to neurology, and efficient assessment of body composition is critical to quantitatively characterizing these relationships. 2D low-dose single-slice computed tomography (CT) provides a high-resolution, quantitative tissue map, albeit with a limited field of view. Although numerous potential analyses have been proposed for quantifying image context, there has been no comprehensive study of low-dose single-slice CT longitudinal variability with automated segmentation. We studied a total of 1816 slices from 1469 subjects of the Baltimore Longitudinal Study on Aging (BLSA) abdominal dataset using a supervised deep learning-based segmentation and an unsupervised clustering method. 300 of the 1469 subjects with a two-year gap between their first two scans were picked out to evaluate longitudinal variability, with measurements including the intraclass correlation coefficient (ICC) and coefficient of variation (CV) in terms of tissue/organ size and mean intensity. We showed that our segmentation methods are stable in longitudinal settings, with Dice scores ranging from 0.821 to 0.962 for thirteen target abdominal tissue structures. We observed high variability in most organs with ICC<0.5, and low variability in the areas of muscle, abdominal wall, fat, and body mask with average ICC>0.8. We found that the variability in organs is highly related to the cross-sectional position of the 2D slice. Our efforts pave the way for quantitative exploration and quality control to reduce uncertainties in longitudinal analysis.
2203.03798
Eugene Belilovsky
Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
New Insights on Reducing Abrupt Representation Change in Online Continual Learning
This has been withdrawn as it is a new version of arXiv:2104.05025
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the online continual learning paradigm, agents must learn from a changing distribution while respecting memory and compute constraints. Experience Replay (ER), where a small subset of past data is stored and replayed alongside new data, has emerged as a simple and effective learning strategy. In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones. We shed new light on this question by showing that applying ER causes the newly added classes' representations to overlap significantly with the previous classes, leading to highly disruptive parameter updates. Based on this empirical analysis, we propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes. We show that using an asymmetric update rule pushes new classes to adapt to the older ones (rather than the reverse), which is more effective especially at task boundaries, where much of the forgetting typically occurs. Empirical results show significant gains over strong baselines on standard continual learning benchmarks.
[ { "created": "Tue, 8 Mar 2022 01:37:00 GMT", "version": "v1" }, { "created": "Thu, 21 Apr 2022 20:46:14 GMT", "version": "v2" }, { "created": "Mon, 25 Apr 2022 14:55:33 GMT", "version": "v3" } ]
2022-04-26
[ [ "Caccia", "Lucas", "" ], [ "Aljundi", "Rahaf", "" ], [ "Asadi", "Nader", "" ], [ "Tuytelaars", "Tinne", "" ], [ "Pineau", "Joelle", "" ], [ "Belilovsky", "Eugene", "" ] ]
In the online continual learning paradigm, agents must learn from a changing distribution while respecting memory and compute constraints. Experience Replay (ER), where a small subset of past data is stored and replayed alongside new data, has emerged as a simple and effective learning strategy. In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones. We shed new light on this question by showing that applying ER causes the newly added classes' representations to overlap significantly with the previous classes, leading to highly disruptive parameter updates. Based on this empirical analysis, we propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes. We show that using an asymmetric update rule pushes new classes to adapt to the older ones (rather than the reverse), which is more effective especially at task boundaries, where much of the forgetting typically occurs. Empirical results show significant gains over strong baselines on standard continual learning benchmarks.
1608.02339
Filippo Bonazzi
Elena Reshetova, Filippo Bonazzi, N. Asokan
SELint: an SEAndroid policy analysis tool
12 pages
Proceedings of the 3rd International Conference on Information Systems Security and Privacy - Volume 1, 2017, pages 47-58
10.5220/0006126600470058
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
SEAndroid enforcement is now mandatory for Android devices. In order to provide the desired level of security for their products, Android OEMs need to be able to minimize their mistakes in writing SEAndroid policies. However, existing SEAndroid and SELinux tools are not very useful for this purpose. It has been shown that SEAndroid policies found in commercially available devices for multiple manufacturers contain mistakes and redundancies. In this paper we present a new tool, SELint, which aims to help OEMs to produce better SEAndroid policies. SELint is extensible and configurable to suit the needs of different OEMs. It is provided with a default configuration based on the AOSP SEAndroid policy, but can be customized by OEMs.
[ { "created": "Mon, 8 Aug 2016 07:31:40 GMT", "version": "v1" }, { "created": "Thu, 6 Oct 2016 17:18:37 GMT", "version": "v2" }, { "created": "Mon, 13 Mar 2017 13:35:59 GMT", "version": "v3" } ]
2017-03-14
[ [ "Reshetova", "Elena", "" ], [ "Bonazzi", "Filippo", "" ], [ "Asokan", "N.", "" ] ]
SEAndroid enforcement is now mandatory for Android devices. In order to provide the desired level of security for their products, Android OEMs need to be able to minimize their mistakes in writing SEAndroid policies. However, existing SEAndroid and SELinux tools are not very useful for this purpose. It has been shown that SEAndroid policies found in commercially available devices for multiple manufacturers contain mistakes and redundancies. In this paper we present a new tool, SELint, which aims to help OEMs to produce better SEAndroid policies. SELint is extensible and configurable to suit the needs of different OEMs. It is provided with a default configuration based on the AOSP SEAndroid policy, but can be customized by OEMs.