id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1906.04199 | Vrunda Dave | V. Dave, E. Filiot, S. Krishna and N. Lhote | Synthesis of Computable Regular Functions of Infinite Words | null | Logical Methods in Computer Science, Volume 18, Issue 2 (June 29,
2022) lmcs:7592 | 10.46298/lmcs-18(2:23)2022 | null | cs.FL cs.LO | http://creativecommons.org/licenses/by/4.0/ | Regular functions from infinite words to infinite words can be equivalently
specified by MSO-transducers, streaming $\omega$-string transducers as well as
deterministic two-way transducers with look-ahead. In their one-way
restriction, the latter transducers define the class of rational functions.
Even though regular functions are robustly characterised by several
finite-state devices, even the subclass of rational functions may contain
functions which are not computable (by a Turing machine with infinite input).
This paper proposes a decision procedure for the following synthesis problem:
given a regular function $f$ (equivalently specified by one of the
aforementioned transducer models), is $f$ computable, and if so, synthesize a
Turing machine computing it.
For regular functions, we show that computability is equivalent to
continuity, and therefore the problem boils down to deciding continuity. We
establish a generic characterisation of continuity for functions preserving
regular languages under inverse image (such as regular functions). We exploit
this characterisation to show the decidability of continuity (and hence
computability) of rational and regular functions. For rational functions, we
show that this can be done in $\mathsf{NLogSpace}$ (it was already known to be
in $\mathsf{PTime}$ by Prieur). In a similar fashion, we also effectively
characterise uniform continuity of regular functions, and relate it to the
notion of uniform computability, which offers stronger efficiency guarantees.
| [
{
"created": "Wed, 15 May 2019 11:35:35 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Jun 2021 14:37:13 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Feb 2022 12:04:14 GMT",
"version": "v3"
},
{
"created": "Mon, 25 Apr 2022 17:13:29 GMT",
"version": "v4"
},
{
"created": "Tue, 28 Jun 2022 11:41:33 GMT",
"version": "v5"
}
] | 2023-06-22 | [
[
"Dave",
"V.",
""
],
[
"Filiot",
"E.",
""
],
[
"Krishna",
"S.",
""
],
[
"Lhote",
"N.",
""
]
] | Regular functions from infinite words to infinite words can be equivalently specified by MSO-transducers, streaming $\omega$-string transducers as well as deterministic two-way transducers with look-ahead. In their one-way restriction, the latter transducers define the class of rational functions. Even though regular functions are robustly characterised by several finite-state devices, even the subclass of rational functions may contain functions which are not computable (by a Turing machine with infinite input). This paper proposes a decision procedure for the following synthesis problem: given a regular function $f$ (equivalently specified by one of the aforementioned transducer models), is $f$ computable, and if so, synthesize a Turing machine computing it. For regular functions, we show that computability is equivalent to continuity, and therefore the problem boils down to deciding continuity. We establish a generic characterisation of continuity for functions preserving regular languages under inverse image (such as regular functions). We exploit this characterisation to show the decidability of continuity (and hence computability) of rational and regular functions. For rational functions, we show that this can be done in $\mathsf{NLogSpace}$ (it was already known to be in $\mathsf{PTime}$ by Prieur). In a similar fashion, we also effectively characterise uniform continuity of regular functions, and relate it to the notion of uniform computability, which offers stronger efficiency guarantees. |
1801.03553 | Mario Diaz | Mario Diaz, Shahab Asoodeh, Fady Alajaji, Tam\'as Linder, Serban
Belinschi and James Mingo | On the Noise-Information Separation of a Private Principal Component
Analysis Scheme | Submitted to the International Symposium on Information Theory (ISIT)
2018 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a survey disclosure model, we consider an additive noise privacy mechanism
and study the trade-off between privacy guarantees and statistical utility.
Privacy is approached from two different but complementary viewpoints:
information and estimation theoretic. Motivated by the performance of principal
component analysis, statistical utility is measured via the spectral gap of a
certain covariance matrix. This formulation and its motivation rely on
classical results from random matrix theory. We prove some properties of this
statistical utility function and discuss a simple numerical method to evaluate
it.
| [
{
"created": "Wed, 10 Jan 2018 21:10:54 GMT",
"version": "v1"
}
] | 2018-01-12 | [
[
"Diaz",
"Mario",
""
],
[
"Asoodeh",
"Shahab",
""
],
[
"Alajaji",
"Fady",
""
],
[
"Linder",
"Tamás",
""
],
[
"Belinschi",
"Serban",
""
],
[
"Mingo",
"James",
""
]
] | In a survey disclosure model, we consider an additive noise privacy mechanism and study the trade-off between privacy guarantees and statistical utility. Privacy is approached from two different but complementary viewpoints: information and estimation theoretic. Motivated by the performance of principal component analysis, statistical utility is measured via the spectral gap of a certain covariance matrix. This formulation and its motivation rely on classical results from random matrix theory. We prove some properties of this statistical utility function and discuss a simple numerical method to evaluate it. |
1211.2268 | Yun Kuen Cheung | Yun Kuen Cheung, Richard Cole, Ashish Rastogi | Tatonnement in Ongoing Markets of Complementary Goods | 44 pages, ACM EC 2012 | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper continues the study, initiated by Cole and Fleischer, of the
behavior of a tatonnement price update rule in Ongoing Fisher Markets. The
prior work showed fast convergence toward an equilibrium when the goods
satisfied the weak gross substitutes property and had bounded demand and income
elasticities.
The current work shows that fast convergence also occurs for the following
types of markets:
- All pairs of goods are complements to each other, and
- the demand and income elasticities are suitably bounded.
In particular, these conditions hold when all buyers in the market are
equipped with CES utilities, where all the parameters $\rho$, one per buyer,
satisfy $-1 < \rho \le 0$.
In addition, we extend the above result to markets in which a mixture of
complements and substitutes occurs. This includes characterizing a class of
nested CES utilities for which fast convergence holds.
An interesting technical contribution, which may be of independent interest,
is an amortized analysis for handling asynchronous events in settings in which
there is a mix of continuous changes and discrete events.
| [
{
"created": "Sat, 10 Nov 2012 00:24:54 GMT",
"version": "v1"
}
] | 2012-11-13 | [
[
"Cheung",
"Yun Kuen",
""
],
[
"Cole",
"Richard",
""
],
[
"Rastogi",
"Ashish",
""
]
] | This paper continues the study, initiated by Cole and Fleischer, of the behavior of a tatonnement price update rule in Ongoing Fisher Markets. The prior work showed fast convergence toward an equilibrium when the goods satisfied the weak gross substitutes property and had bounded demand and income elasticities. The current work shows that fast convergence also occurs for the following types of markets: - All pairs of goods are complements to each other, and - the demand and income elasticities are suitably bounded. In particular, these conditions hold when all buyers in the market are equipped with CES utilities, where all the parameters $\rho$, one per buyer, satisfy $-1 < \rho \le 0$. In addition, we extend the above result to markets in which a mixture of complements and substitutes occurs. This includes characterizing a class of nested CES utilities for which fast convergence holds. An interesting technical contribution, which may be of independent interest, is an amortized analysis for handling asynchronous events in settings in which there is a mix of continuous changes and discrete events. |
1912.07476 | Bobak McCann | Bobak McCann, Kerry Fendick, Aaron David, Antonio DeSimone, Steven
Handy | Improving Conversation Quality for VoIP Through Block Erasure Coding | null | null | null | null | cs.IT eess.IV math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The conversational quality of voice over IP (VoIP) depends on packet-loss
rates, burstiness of packet loss, and delays (or latencies). The benefits for
conversational voice quality of erasure coding attributable to its reduction in
packet loss rates are widely appreciated. When block erasure coding is used,
our analysis shows how those benefits are reduced or even eliminated by
increases in delays and in a measure of burstiness of packet loss. We
nevertheless show that the net effect of those three factors is still positive
over a wide range of network loss rates provided that block sizes are
sufficiently small and the sizes of decoding buffers have been optimized for
real-time media. To perform this analysis, we develop a new analytical model
describing the effects of block erasure coding on end-to-end network
performance.
| [
{
"created": "Mon, 16 Dec 2019 16:18:19 GMT",
"version": "v1"
}
] | 2019-12-17 | [
[
"McCann",
"Bobak",
""
],
[
"Fendick",
"Kerry",
""
],
[
"David",
"Aaron",
""
],
[
"DeSimone",
"Antonio",
""
],
[
"Handy",
"Steven",
""
]
] | The conversational quality of voice over IP (VoIP) depends on packet-loss rates, burstiness of packet loss, and delays (or latencies). The benefits for conversational voice quality of erasure coding attributable to its reduction in packet loss rates are widely appreciated. When block erasure coding is used, our analysis shows how those benefits are reduced or even eliminated by increases in delays and in a measure of burstiness of packet loss. We nevertheless show that the net effect of those three factors is still positive over a wide range of network loss rates provided that block sizes are sufficiently small and the sizes of decoding buffers have been optimized for real-time media. To perform this analysis, we develop a new analytical model describing the effects of block erasure coding on end-to-end network performance. |
2307.05506 | Doksoo Lee | Doksoo Lee, Wei Wayne Chen, Liwei Wang, Yu-Chin Chan, Wei Chen | Data-Driven Design for Metamaterials and Multiscale Systems: A Review | null | null | 10.1002/adma.202305254 | null | cs.CE cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by/4.0/ | Metamaterials are artificial materials designed to exhibit effective material
parameters that go beyond those found in nature. Composed of unit cells with
rich designability that are assembled into multiscale systems, they hold great
promise for realizing next-generation devices with exceptional, often exotic,
functionalities. However, the vast design space and intricate
structure-property relationships pose significant challenges in their design. A
compelling paradigm that could bring the full potential of metamaterials to
fruition is emerging: data-driven design. In this review, we provide a holistic
overview of this rapidly evolving field, emphasizing the general methodology
instead of specific domains and deployment contexts. We organize existing
research into data-driven modules, encompassing data acquisition, machine
learning-based unit cell design, and data-driven multiscale optimization. We
further categorize the approaches within each module based on shared
principles, analyze and compare strengths and applicability, explore
connections between different modules, and identify open research questions and
opportunities.
| [
{
"created": "Sat, 1 Jul 2023 22:36:40 GMT",
"version": "v1"
}
] | 2023-12-07 | [
[
"Lee",
"Doksoo",
""
],
[
"Chen",
"Wei Wayne",
""
],
[
"Wang",
"Liwei",
""
],
[
"Chan",
"Yu-Chin",
""
],
[
"Chen",
"Wei",
""
]
] | Metamaterials are artificial materials designed to exhibit effective material parameters that go beyond those found in nature. Composed of unit cells with rich designability that are assembled into multiscale systems, they hold great promise for realizing next-generation devices with exceptional, often exotic, functionalities. However, the vast design space and intricate structure-property relationships pose significant challenges in their design. A compelling paradigm that could bring the full potential of metamaterials to fruition is emerging: data-driven design. In this review, we provide a holistic overview of this rapidly evolving field, emphasizing the general methodology instead of specific domains and deployment contexts. We organize existing research into data-driven modules, encompassing data acquisition, machine learning-based unit cell design, and data-driven multiscale optimization. We further categorize the approaches within each module based on shared principles, analyze and compare strengths and applicability, explore connections between different modules, and identify open research questions and opportunities. |
1804.08355 | Hui Li | Hui Li, Xiao-Jun Wu | Multi-focus Image Fusion using dictionary learning and Low-Rank
Representation | 12 pages, 5 figures, 2 tables. The 9th International Conference on
Image and Graphics (ICIG 2017, Oral) | null | 10.1007/978-3-319-71607-7_59 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among representation learning methods, low-rank representation (LRR) is one
of the hot research topics in many fields, especially in image processing and
pattern recognition. Although LRR can capture the global structure, the ability
of local structure preservation is limited because LRR lacks dictionary
learning. In this paper, we propose a novel multi-focus image fusion method
based on dictionary learning and LRR to get a better performance in both global
and local structure. Firstly, the source images are divided into several
patches by a sliding window technique. Then, the patches are classified according
to the Histogram of Oriented Gradient (HOG) features. And the sub-dictionaries
of each class are learned by the K-singular value decomposition (K-SVD) algorithm.
Secondly, a global dictionary is constructed by combining these
sub-dictionaries. Then, we use the global dictionary in LRR to obtain the LRR
coefficient vector for each patch. Finally, the l_1-norm and choose-max fusion
strategy is adopted for each coefficient vector to reconstruct the fused image
from the fused LRR coefficients and the global dictionary. Experimental results
demonstrate that the proposed method can obtain state-of-the-art performance in
both qualitative and quantitative evaluations compared with several classical
and novel methods. The code of our fusion method is available at
https://github.com/hli1221/imagefusion_dllrr
| [
{
"created": "Mon, 23 Apr 2018 11:57:44 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Dec 2018 08:32:20 GMT",
"version": "v2"
}
] | 2018-12-19 | [
[
"Li",
"Hui",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] | Among representation learning methods, low-rank representation (LRR) is one of the hot research topics in many fields, especially in image processing and pattern recognition. Although LRR can capture the global structure, the ability of local structure preservation is limited because LRR lacks dictionary learning. In this paper, we propose a novel multi-focus image fusion method based on dictionary learning and LRR to get a better performance in both global and local structure. Firstly, the source images are divided into several patches by a sliding window technique. Then, the patches are classified according to the Histogram of Oriented Gradient (HOG) features. And the sub-dictionaries of each class are learned by the K-singular value decomposition (K-SVD) algorithm. Secondly, a global dictionary is constructed by combining these sub-dictionaries. Then, we use the global dictionary in LRR to obtain the LRR coefficient vector for each patch. Finally, the l_1-norm and choose-max fusion strategy is adopted for each coefficient vector to reconstruct the fused image from the fused LRR coefficients and the global dictionary. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance in both qualitative and quantitative evaluations compared with several classical and novel methods. The code of our fusion method is available at https://github.com/hli1221/imagefusion_dllrr |
1503.02009 | Sergio Consoli | Sergio Consoli, Jos\`e Andr\`es Moreno P\`erez, and Nenad Mladenovic | Towards an intelligent VNS heuristic for the k-labelled spanning forest
problem | 2 pages, Fifteenth International Conference on Computer Aided Systems
Theory (EUROCAST 2015), Las Palmas de Gran Canaria, Spain | Computer Aided Systems Theory, pages 79-80 (2015) | null | null | cs.OH cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a currently ongoing project, we investigate a new possibility for solving
the k-labelled spanning forest (kLSF) problem by an intelligent Variable
Neighbourhood Search (Int-VNS) metaheuristic. In the kLSF problem we are given
an undirected input graph G and a positive integer k, and the aim is to
find a spanning forest of G having the minimum number of connected components
while using at most k labels. The problem is related to
the minimum labelling spanning tree (MLST) problem, whose goal is to get the
spanning tree of the input graph with the minimum number of labels, and has
several applications in the real world, where one aims to ensure connectivity
by means of homogeneous connections. The Int-VNS metaheuristic that we propose
for the kLSF problem is derived from the promising intelligent VNS strategy
recently proposed for the MLST problem, and integrates the basic VNS for the
kLSF problem with other complementary approaches from machine learning,
statistics and experimental algorithmics, in order to produce high-quality
performance and to completely automate the resulting strategy.
| [
{
"created": "Thu, 5 Mar 2015 14:10:19 GMT",
"version": "v1"
}
] | 2015-03-09 | [
[
"Consoli",
"Sergio",
""
],
[
"Pèrez",
"Josè Andrès Moreno",
""
],
[
"Mladenovic",
"Nenad",
""
]
] | In a currently ongoing project, we investigate a new possibility for solving the k-labelled spanning forest (kLSF) problem by an intelligent Variable Neighbourhood Search (Int-VNS) metaheuristic. In the kLSF problem we are given an undirected input graph G and a positive integer k, and the aim is to find a spanning forest of G having the minimum number of connected components while using at most k labels. The problem is related to the minimum labelling spanning tree (MLST) problem, whose goal is to get the spanning tree of the input graph with the minimum number of labels, and has several applications in the real world, where one aims to ensure connectivity by means of homogeneous connections. The Int-VNS metaheuristic that we propose for the kLSF problem is derived from the promising intelligent VNS strategy recently proposed for the MLST problem, and integrates the basic VNS for the kLSF problem with other complementary approaches from machine learning, statistics and experimental algorithmics, in order to produce high-quality performance and to completely automate the resulting strategy. |
2312.14991 | Huiyan Qi | Yuehao Yin, Huiyan Qi, Bin Zhu, Jingjing Chen, Yu-Gang Jiang,
Chong-Wah Ngo | FoodLMM: A Versatile Food Assistant using Large Multi-modal Model | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Multi-modal Models (LMMs) have made impressive progress in many
vision-language tasks. Nevertheless, the performance of general LMMs in
specific domains is still far from satisfactory. This paper proposes FoodLMM, a
versatile food assistant based on LMMs with various capabilities, including
food recognition, ingredient recognition, recipe generation, nutrition
estimation, food segmentation and multi-round conversation. To facilitate
FoodLMM to deal with tasks beyond pure text output, we introduce a series of
novel task-specific tokens and heads, enabling the model to predict food
nutritional values and multiple segmentation masks. We adopt a two-stage
training strategy. In the first stage, we utilize multiple public food
benchmarks for multi-task learning by leveraging the instruction-following
paradigm. In the second stage, we construct a multi-round conversation dataset
and a reasoning segmentation dataset to fine-tune the model, enabling it to
conduct professional dialogues and generate segmentation masks based on complex
reasoning in the food domain. Our fine-tuned FoodLMM achieves state-of-the-art
results across several food benchmarks. We will make our code, models and
datasets publicly available.
| [
{
"created": "Fri, 22 Dec 2023 11:56:22 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2024 14:21:20 GMT",
"version": "v2"
}
] | 2024-04-15 | [
[
"Yin",
"Yuehao",
""
],
[
"Qi",
"Huiyan",
""
],
[
"Zhu",
"Bin",
""
],
[
"Chen",
"Jingjing",
""
],
[
"Jiang",
"Yu-Gang",
""
],
[
"Ngo",
"Chong-Wah",
""
]
] | Large Multi-modal Models (LMMs) have made impressive progress in many vision-language tasks. Nevertheless, the performance of general LMMs in specific domains is still far from satisfactory. This paper proposes FoodLMM, a versatile food assistant based on LMMs with various capabilities, including food recognition, ingredient recognition, recipe generation, nutrition estimation, food segmentation and multi-round conversation. To facilitate FoodLMM to deal with tasks beyond pure text output, we introduce a series of novel task-specific tokens and heads, enabling the model to predict food nutritional values and multiple segmentation masks. We adopt a two-stage training strategy. In the first stage, we utilize multiple public food benchmarks for multi-task learning by leveraging the instruct-following paradigm. In the second stage, we construct a multi-round conversation dataset and a reasoning segmentation dataset to fine-tune the model, enabling it to conduct professional dialogues and generate segmentation masks based on complex reasoning in the food domain. Our fine-tuned FoodLMM achieves state-of-the-art results across several food benchmarks. We will make our code, models and datasets publicly available. |
1109.5083 | Vivek Nittoor | Vivek S Nittoor, Reiji Suda | A Mathematical Approach to Balanced Tanner Graph Enumeration | 8 pages - results in this paper have been superseded by new results | null | null | null | cs.IT math.CO math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper summarizes our latest understanding and results about the
application of the Mathematics Of Enumeration to Tanner Graphs that have a
regular structure called Balanced Tanner Graphs. Some preliminaries of
permutation groups have been presented followed by various enumeration
theorems, and finally our approach for enumeration of Balanced Tanner Graphs
has been explained, and several open questions have been raised.
| [
{
"created": "Fri, 23 Sep 2011 14:10:35 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Dec 2012 12:55:50 GMT",
"version": "v2"
}
] | 2013-01-01 | [
[
"Nittoor",
"Vivek S",
""
],
[
"Suda",
"Reiji",
""
]
] | This paper summarizes our latest understanding and results about the application of the Mathematics Of Enumeration to Tanner Graphs that have a regular structure called Balanced Tanner Graphs. Some preliminaries of permutation groups have been presented followed by various enumeration theorems, and finally our approach for enumeration of Balanced Tanner Graphs has been explained, and several open questions have been raised. |
1206.6859 | Kathryn Blackmond Laskey | Kathryn Blackmond Laskey, Ning Xu, Chun-Hung Chen | Propagation of Delays in the National Airspace System | Appears in Proceedings of the Twenty-Second Conference on Uncertainty
in Artificial Intelligence (UAI2006) | null | null | UAI-P-2006-PG-265-272 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The National Airspace System (NAS) is a large and complex system with
thousands of interrelated components: administration, control centers,
airports, airlines, aircraft, passengers, etc. The complexity of the NAS
creates many difficulties in management and control. One of the most pressing
problems is flight delay. Delay creates high cost to airlines, complaints from
passengers, and difficulties for airport operations. As demand on the system
increases, the delay problem becomes more and more prominent. For this reason,
it is essential for the Federal Aviation Administration to understand the
causes of delay and to find ways to reduce delay. Major contributing factors to
delay are congestion at the origin airport, weather, increasing demand, and air
traffic management (ATM) decisions such as the Ground Delay Programs (GDP).
Delay is an inherently stochastic phenomenon. Even if all known causal factors
could be accounted for, macro-level national airspace system (NAS) delays could
not be predicted with certainty from micro-level aircraft information. This
paper presents a stochastic model that uses Bayesian Networks (BNs) to model
the relationships among different components of aircraft delay and the causal
factors that affect delays. A case study on delays of departure flights from
Chicago O'Hare international airport (ORD) to Hartsfield-Jackson Atlanta
International Airport (ATL) reveals how local and system level environmental
and human-caused factors combine to affect components of delay, and how these
components contribute to the final arrival delay at the destination airport.
| [
{
"created": "Wed, 27 Jun 2012 16:27:12 GMT",
"version": "v1"
}
] | 2012-07-02 | [
[
"Laskey",
"Kathryn Blackmond",
""
],
[
"Xu",
"Ning",
""
],
[
"Chen",
"Chun-Hung",
""
]
] | The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high cost to airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce delay. Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as the Ground Delay Programs (GDP). Delay is an inherently stochastic phenomenon. Even if all known causal factors could be accounted for, macro-level national airspace system (NAS) delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O'Hare international airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport. |
1903.06875 | Jean Louis Fendji Kedieng Ebongue | Jean Louis Ebongue Kedieng Fendji and Sidoine Djuissi Samo | Energy and performance evaluation of reactive, proactive, and hybrid
routing protocols in wireless mesh network | 19 pages | International Journal of Wireless & Mobile Networks (IJWMN) Vol.
11, No. 1, February 2019 | 10.5121/ijwmn.2019.11102 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper evaluates the energy consumption of well-known routing protocols,
along with other metrics such as throughput, packet delivery ratio (PDR), and
delay in different scenarios. We consider two other metrics in order to capture
the efficiency of the energy consumption: e-throughput, which is the ratio
between the consumed energy and the throughput; and the e-PDR, which is the
ratio between the consumed energy and the PDR. We compare four routing
protocols: AODV, OLSR, and HWMP in Reactive and Proactive modes. The number of
nodes is varying between 25 and 81 nodes, with different mobility models.
Simulations are conducted using NS3 and the parameters of a real network
interface card. From the results, AODV presents the lowest energy consumption
and a better e-Throughput. OLSR provides a better e-PDR in mobile scenarios.
With a smaller e-PDR and e-Throughput, the proactive mode of HWMP is more
energy efficient than the reactive mode.
| [
{
"created": "Sat, 16 Mar 2019 03:41:18 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Fendji",
"Jean Louis Ebongue Kedieng",
""
],
[
"Samo",
"Sidoine Djuissi",
""
]
] | This paper evaluates the energy consumption of well-known routing protocols, along with other metrics such as throughput, packet delivery ratio (PDR), and delay in different scenarios. We consider two other metrics in order to capture the efficiency of the energy consumption: e-throughput, which is the ratio between the consumed energy and the throughput; and the e-PDR, which is the ratio between the consumed energy and the PDR. We compare four routing protocols: AODV, OLSR, and HWMP in Reactive and Proactive modes. The number of nodes varies between 25 and 81, with different mobility models. Simulations are conducted using NS3 and the parameters of a real network interface card. From the results, AODV presents the lowest energy consumption and a better e-Throughput. OLSR provides a better e-PDR in mobile scenarios. With a smaller e-PDR and e-Throughput, the proactive mode of HWMP is more energy efficient than the reactive mode. |
1507.04002 | J{\o}rgen Villadsen | J{\o}rgen Villadsen, Alexander Birch Jensen and Anders Schlichtkrull | NaDeA: A Natural Deduction Assistant with a Formalization in Isabelle | Proceedings of the Fourth International Conference on Tools for
Teaching Logic (TTL2015), Rennes, France, June 9-12, 2015. Editors: M.
Antonia Huertas, Jo\~ao Marcos, Mar\'ia Manzano, Sophie Pinchinat,
Fran\c{c}ois Schwarzentruber | null | null | null | cs.CY cs.LO | http://creativecommons.org/licenses/by/4.0/ | We present a new software tool for teaching logic based on natural deduction.
Its proof system is formalized in the proof assistant Isabelle such that its
definition is very precise. Soundness of the formalization has been proved in
Isabelle. The tool is open source software developed in TypeScript / JavaScript
and can thus be used directly in a browser without any further installation.
Although developed for undergraduate computer science students, who are used to
studying and programming concrete computer code in a programming language, we
consider the approach relevant for a broader audience and for other proof
systems as well.
| [
{
"created": "Tue, 14 Jul 2015 20:02:30 GMT",
"version": "v1"
}
] | 2015-07-16 | [
[
"Villadsen",
"Jørgen",
""
],
[
"Jensen",
"Alexander Birch",
""
],
[
"Schlichtkrull",
"Anders",
""
]
] | We present a new software tool for teaching logic based on natural deduction. Its proof system is formalized in the proof assistant Isabelle such that its definition is very precise. Soundness of the formalization has been proved in Isabelle. The tool is open source software developed in TypeScript / JavaScript and can thus be used directly in a browser without any further installation. Although developed for undergraduate computer science students who are used to studying and programming concrete computer code in a programming language, we consider the approach relevant for a broader audience and for other proof systems as well. |
1702.06144 | Uri Erez | Ran Hadad, Uri Erez, and Yaming Yu | An Inequality for the Correlation of Two Functions Operating on
Symmetric Bivariate Normal Variables | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An inequality is derived for the correlation of two univariate functions
operating on symmetric bivariate normal random variables. The inequality is a
simple consequence of the Cauchy-Schwarz inequality.
| [
{
"created": "Mon, 20 Feb 2017 19:11:10 GMT",
"version": "v1"
}
] | 2017-02-22 | [
[
"Hadad",
"Ran",
""
],
[
"Erez",
"Uri",
""
],
[
"Yu",
"Yaming",
""
]
] | An inequality is derived for the correlation of two univariate functions operating on symmetric bivariate normal random variables. The inequality is a simple consequence of the Cauchy-Schwarz inequality. |
2203.09017 | Bo Liu | Bo Liu, Qiulei Dong, Zhanyi Hu | Semantic-diversity transfer network for generalized zero-shot learning
via inner disagreement based OOD detector | 35 pages, 6 figures | Knowledge based System, 2021 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Zero-shot learning (ZSL) aims to recognize objects from unseen classes, where
the kernel problem is to transfer knowledge from seen classes to unseen classes
by establishing appropriate mappings between visual and semantic features. The
knowledge transfer in many existing works is limited mainly due to the facts
that 1) the widely used visual features are global ones but not totally
consistent with semantic attributes; 2) only one mapping is learned in existing
works, which is not able to effectively model diverse visual-semantic
relations; 3) the bias problem in the generalized ZSL (GZSL) could not be
effectively handled. In this paper, we propose two techniques to alleviate
these limitations. Firstly, we propose a Semantic-diversity transfer Network
(SetNet) addressing the first two limitations, where 1) a multiple-attention
architecture and a diversity regularizer are proposed to learn multiple local
visual features that are more consistent with semantic attributes and 2) a
projector ensemble that geometrically takes diverse local features as inputs is
proposed to model visual-semantic relations from diverse local perspectives.
Secondly, we propose an inner disagreement based domain detection module (ID3M)
for GZSL to alleviate the third limitation, which picks out unseen-class data
before class-level classification. Due to the absence of unseen-class data in
the training stage, ID3M employs a novel self-contained training scheme and
detects unseen-class data based on a designed inner disagreement criterion.
Experimental results on three public datasets demonstrate that the proposed
SetNet with the explored ID3M achieves a significant improvement against $30$
state-of-the-art methods.
| [
{
"created": "Thu, 17 Mar 2022 01:31:27 GMT",
"version": "v1"
}
] | 2022-03-18 | [
[
"Liu",
"Bo",
""
],
[
"Dong",
"Qiulei",
""
],
[
"Hu",
"Zhanyi",
""
]
] | Zero-shot learning (ZSL) aims to recognize objects from unseen classes, where the kernel problem is to transfer knowledge from seen classes to unseen classes by establishing appropriate mappings between visual and semantic features. The knowledge transfer in many existing works is limited mainly due to the facts that 1) the widely used visual features are global ones but not totally consistent with semantic attributes; 2) only one mapping is learned in existing works, which is not able to effectively model diverse visual-semantic relations; 3) the bias problem in the generalized ZSL (GZSL) could not be effectively handled. In this paper, we propose two techniques to alleviate these limitations. Firstly, we propose a Semantic-diversity transfer Network (SetNet) addressing the first two limitations, where 1) a multiple-attention architecture and a diversity regularizer are proposed to learn multiple local visual features that are more consistent with semantic attributes and 2) a projector ensemble that geometrically takes diverse local features as inputs is proposed to model visual-semantic relations from diverse local perspectives. Secondly, we propose an inner disagreement based domain detection module (ID3M) for GZSL to alleviate the third limitation, which picks out unseen-class data before class-level classification. Due to the absence of unseen-class data in the training stage, ID3M employs a novel self-contained training scheme and detects unseen-class data based on a designed inner disagreement criterion. Experimental results on three public datasets demonstrate that the proposed SetNet with the explored ID3M achieves a significant improvement against $30$ state-of-the-art methods. |
2108.07401 | Shengcheng Yu | Shengcheng Yu, Chunrong Fang, Quanjun Zhang, Zhihao Cao, Yexiao Yun,
Zhenfei Cao, Kai Mei, Zhenyu Chen | Mobile App Crowdsourced Test Report Consistency Detection via Deep
Image-and-Text Fusion Understanding | null | null | 10.1109/TSE.2023.3285787 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourced testing, as a distinct testing paradigm, has attracted much
attention in software testing, especially in the mobile application (app) testing
field. Compared with in-house testing, crowdsourced testing shows superiority
with the diverse testing environments when faced with the mobile testing
fragmentation problem. However, crowdsourced testing also encounters the
low-quality test report problem caused by unprofessional crowdworkers involved
with different expertise. In order to handle the submitted reports of uneven
quality, app developers have to distinguish high-quality reports from
low-quality ones to help the bug inspection. One kind of typical low-quality
test report is inconsistent test reports, which means the textual descriptions
are not focusing on the attached bug-occurring screenshots. According to our
empirical survey, only 18.07% of crowdsourced test reports are consistent.
Inconsistent reports cause waste in mobile app testing.
To solve the inconsistency problem, we propose ReCoDe to detect the
consistency of crowdsourced test reports via deep image-and-text fusion
understanding. ReCoDe is a two-stage approach that first classifies the reports
based on textual descriptions into different categories according to the bug
feature. In the second stage, ReCoDe has a deep understanding of the GUI image
features of the app screenshots and then applies different strategies to handle
different types of bugs to detect the consistency of the crowdsourced test
reports. We conduct an experiment on a dataset with over 22k test reports to
evaluate ReCoDe, and the results show the effectiveness of ReCoDe in detecting
the consistency of crowdsourced test reports. Besides, a user study is
conducted to prove the practical value of ReCoDe in effectively helping app
developers improve the efficiency of reviewing the crowdsourced test reports.
| [
{
"created": "Tue, 17 Aug 2021 02:02:56 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Aug 2021 02:22:06 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jun 2023 08:26:44 GMT",
"version": "v3"
}
] | 2023-06-13 | [
[
"Yu",
"Shengcheng",
""
],
[
"Fang",
"Chunrong",
""
],
[
"Zhang",
"Quanjun",
""
],
[
"Cao",
"Zhihao",
""
],
[
"Yun",
"Yexiao",
""
],
[
"Cao",
"Zhenfei",
""
],
[
"Mei",
"Kai",
""
],
[
"Chen",
"Zhenyu",
""
]
] | Crowdsourced testing, as a distinct testing paradigm, has attracted much attention in software testing, especially in the mobile application (app) testing field. Compared with in-house testing, crowdsourced testing shows superiority with the diverse testing environments when faced with the mobile testing fragmentation problem. However, crowdsourced testing also encounters the low-quality test report problem caused by unprofessional crowdworkers involved with different expertise. In order to handle the submitted reports of uneven quality, app developers have to distinguish high-quality reports from low-quality ones to help the bug inspection. One kind of typical low-quality test report is inconsistent test reports, which means the textual descriptions are not focusing on the attached bug-occurring screenshots. According to our empirical survey, only 18.07% of crowdsourced test reports are consistent. Inconsistent reports cause waste in mobile app testing. To solve the inconsistency problem, we propose ReCoDe to detect the consistency of crowdsourced test reports via deep image-and-text fusion understanding. ReCoDe is a two-stage approach that first classifies the reports based on textual descriptions into different categories according to the bug feature. In the second stage, ReCoDe has a deep understanding of the GUI image features of the app screenshots and then applies different strategies to handle different types of bugs to detect the consistency of the crowdsourced test reports. We conduct an experiment on a dataset with over 22k test reports to evaluate ReCoDe, and the results show the effectiveness of ReCoDe in detecting the consistency of crowdsourced test reports. Besides, a user study is conducted to prove the practical value of ReCoDe in effectively helping app developers improve the efficiency of reviewing the crowdsourced test reports. |
2405.01115 | Hongliang Zhang | Hongliang Zhang, Yilan Zhou, Lei Wang and Tengchao Huang | A New Self-Alignment Method without Solving Wahba Problem for SINS in
Autonomous Vehicles | null | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Initial alignment is one of the key technologies in strapdown inertial
navigation system (SINS) to provide initial state information for vehicle
attitude and navigation. For some situations, such as the attitude heading
reference system, the position is not necessarily required or even available,
then the self-alignment that does not rely on any external aid becomes very
necessary. This study presents a new self-alignment method under swaying
conditions, which can determine the latitude and attitude simultaneously by
utilizing all observation vectors without solving the Wahba problem, and it is
different from the existing methods. By constructing the dyadic tensor of each
observation and reference vector itself, all equations related to observation
and reference vectors are accumulated into one equation, where the latitude
variable is extracted and solved according to the same eigenvalues of similar
matrices on both sides of the equation, meanwhile the attitude is obtained by
eigenvalue decomposition. Simulation and experiment tests verify the
effectiveness of the proposed methods, and the alignment result is better than
TRIAD in convergence speed and stability and comparable with OBA method in
alignment accuracy with or without latitude. It is useful for guiding the
design of initial alignment in autonomous vehicle applications.
| [
{
"created": "Thu, 2 May 2024 09:23:37 GMT",
"version": "v1"
}
] | 2024-05-03 | [
[
"Zhang",
"Hongliang",
""
],
[
"Zhou",
"Yilan",
""
],
[
"Wang",
"Lei",
""
],
[
"Huang",
"Tengchao",
""
]
] | Initial alignment is one of the key technologies in strapdown inertial navigation system (SINS) to provide initial state information for vehicle attitude and navigation. For some situations, such as the attitude heading reference system, the position is not necessarily required or even available, then the self-alignment that does not rely on any external aid becomes very necessary. This study presents a new self-alignment method under swaying conditions, which can determine the latitude and attitude simultaneously by utilizing all observation vectors without solving the Wahba problem, and it is different from the existing methods. By constructing the dyadic tensor of each observation and reference vector itself, all equations related to observation and reference vectors are accumulated into one equation, where the latitude variable is extracted and solved according to the same eigenvalues of similar matrices on both sides of the equation, meanwhile the attitude is obtained by eigenvalue decomposition. Simulation and experiment tests verify the effectiveness of the proposed methods, and the alignment result is better than TRIAD in convergence speed and stability and comparable with OBA method in alignment accuracy with or without latitude. It is useful for guiding the design of initial alignment in autonomous vehicle applications. |
2311.02715 | Arun Verma | Arun Verma, Zhongxiang Dai, Yao Shu, Bryan Kian Hsiang Low | Exploiting Correlated Auxiliary Feedback in Parameterized Bandits | Accepted to NeurIPS 2023 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We study a novel variant of the parameterized bandits problem in which the
learner can observe additional auxiliary feedback that is correlated with the
observed reward. The auxiliary feedback is readily available in many real-life
applications, e.g., an online platform that wants to recommend the best-rated
services to its users can observe the user's rating of service (rewards) and
collect additional information like service delivery time (auxiliary feedback).
In this paper, we first develop a method that exploits auxiliary feedback to
build a reward estimator with tight confidence bounds, leading to a smaller
regret. We then characterize the regret reduction in terms of the correlation
coefficient between reward and its auxiliary feedback. Experimental results in
different settings also verify the performance gain achieved by our proposed
method.
| [
{
"created": "Sun, 5 Nov 2023 17:27:06 GMT",
"version": "v1"
}
] | 2023-11-07 | [
[
"Verma",
"Arun",
""
],
[
"Dai",
"Zhongxiang",
""
],
[
"Shu",
"Yao",
""
],
[
"Low",
"Bryan Kian Hsiang",
""
]
] | We study a novel variant of the parameterized bandits problem in which the learner can observe additional auxiliary feedback that is correlated with the observed reward. The auxiliary feedback is readily available in many real-life applications, e.g., an online platform that wants to recommend the best-rated services to its users can observe the user's rating of service (rewards) and collect additional information like service delivery time (auxiliary feedback). In this paper, we first develop a method that exploits auxiliary feedback to build a reward estimator with tight confidence bounds, leading to a smaller regret. We then characterize the regret reduction in terms of the correlation coefficient between reward and its auxiliary feedback. Experimental results in different settings also verify the performance gain achieved by our proposed method. |
2211.03206 | Maria Serna | Josep D\'iaz, \"Oznur Ya\c{s}ar Diner, Maria Serna, Oriol Serra | On Vertex Bisection Width of Random $d$-Regular Graphs | 31 pages, 2 figures | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | Vertex bisection is a graph partitioning problem in which the aim is to find
a partition into two equal parts that minimizes the number of vertices in one
partition set that have a neighbor in the other set. We are interested in
giving upper bounds on the vertex bisection width of random $d$-regular graphs
for constant values of $d$. Our approach is based on analyzing a greedy
algorithm by using the Differential Equations Method. In this way, we obtain
the first known upper bounds for the vertex bisection width in random regular
graphs. The results are compared with experimental ones and with lower bounds
obtained by Kolesnik and Wormald, (Lower Bounds for the Isoperimetric Numbers
of Random Regular Graphs, SIAM J. on Disc. Math. 28(1), 553-575, 2014).
| [
{
"created": "Sun, 6 Nov 2022 19:17:04 GMT",
"version": "v1"
}
] | 2022-11-08 | [
[
"Díaz",
"Josep",
""
],
[
"Diner",
"Öznur Yaşar",
""
],
[
"Serna",
"Maria",
""
],
[
"Serra",
"Oriol",
""
]
] | Vertex bisection is a graph partitioning problem in which the aim is to find a partition into two equal parts that minimizes the number of vertices in one partition set that have a neighbor in the other set. We are interested in giving upper bounds on the vertex bisection width of random $d$-regular graphs for constant values of $d$. Our approach is based on analyzing a greedy algorithm by using the Differential Equations Method. In this way, we obtain the first known upper bounds for the vertex bisection width in random regular graphs. The results are compared with experimental ones and with lower bounds obtained by Kolesnik and Wormald, (Lower Bounds for the Isoperimetric Numbers of Random Regular Graphs, SIAM J. on Disc. Math. 28(1), 553-575, 2014). |
2003.07945 | Yang Hu | Soroush Bateni, Zhendong Wang, Yuankun Zhu, Yang Hu, Cong Liu | Co-Optimizing Performance and Memory Footprint Via Integrated CPU/GPU
Memory Management, an Implementation on Autonomous Driving Platform | null | null | null | null | cs.DC cs.OS | http://creativecommons.org/licenses/by/4.0/ | Cutting-edge embedded system applications, such as self-driving cars and
unmanned drone software, are reliant on integrated CPU/GPU platforms for their
DNNs-driven workload, such as perception and other highly parallel components.
In this work, we set out to explore the hidden performance implication of GPU
memory management methods of integrated CPU/GPU architecture. Through a series
of experiments on micro-benchmarks and real-world workloads, we find that the
performance under different memory management methods may vary according to
application characteristics. Based on this observation, we develop a
performance model that can predict system overhead for each memory management
method based on application characteristics. Guided by the performance model,
we further propose a runtime scheduler. By conducting per-task memory
management policy switching and kernel overlapping, the scheduler can
significantly relieve the system memory pressure and reduce the multitasking
co-run response time. We have implemented and extensively evaluated our system
prototype on the NVIDIA Jetson TX2, Drive PX2, and Xavier AGX platforms, using
both Rodinia benchmark suite and two real-world case studies of drone software
and autonomous driving software.
| [
{
"created": "Tue, 17 Mar 2020 21:09:24 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Mar 2020 02:43:49 GMT",
"version": "v2"
}
] | 2020-03-20 | [
[
"Bateni",
"Soroush",
""
],
[
"Wang",
"Zhendong",
""
],
[
"Zhu",
"Yuankun",
""
],
[
"Hu",
"Yang",
""
],
[
"Liu",
"Cong",
""
]
] | Cutting-edge embedded system applications, such as self-driving cars and unmanned drone software, are reliant on integrated CPU/GPU platforms for their DNNs-driven workload, such as perception and other highly parallel components. In this work, we set out to explore the hidden performance implication of GPU memory management methods of integrated CPU/GPU architecture. Through a series of experiments on micro-benchmarks and real-world workloads, we find that the performance under different memory management methods may vary according to application characteristics. Based on this observation, we develop a performance model that can predict system overhead for each memory management method based on application characteristics. Guided by the performance model, we further propose a runtime scheduler. By conducting per-task memory management policy switching and kernel overlapping, the scheduler can significantly relieve the system memory pressure and reduce the multitasking co-run response time. We have implemented and extensively evaluated our system prototype on the NVIDIA Jetson TX2, Drive PX2, and Xavier AGX platforms, using both Rodinia benchmark suite and two real-world case studies of drone software and autonomous driving software. |
2303.01957 | Bireswar Das | Bireswar Das, Anant Kumar, Shivdutt Sharma, Dhara Thakkar | Linear Space Data Structures for Finite Groups with Constant Query-time | A preliminary version of this article appeared in the proceedings
of the 39th International Symposium on Theoretical Aspects of Computer
Science (STACS 2022) | null | null | null | cs.DS cs.DM math.CO math.GR | http://creativecommons.org/licenses/by/4.0/ | A finite group of order $n$ can be represented by its Cayley table. In the
word-RAM model the Cayley table of a group of order $n$ can be stored using
$O(n^2)$ words and can be used to answer a multiplication query in constant
time. It is interesting to ask if we can design a data structure to store a
group of order $n$ that uses $o(n^2)$ space but can still answer a
multiplication query in constant time.
We design a constant query-time data structure that can store any finite
group using $O(n)$ words where $n$ is the order of the group.
Farzan and Munro (ISSAC 2006) gave an information theoretic lower bound of
$\Omega(n)$ on the number of words to store a group of order $n$. Since our
data structure achieves this lower bound and answers queries in constant time,
it is optimal in both space usage and query-time.
A crucial step in the process is essentially to design linear space and
constant query-time data structures for nonabelian simple groups. The data
structures for nonabelian simple groups are designed using a lemma that we
prove using the Classification Theorem for Finite Simple Groups (CFSG).
| [
{
"created": "Fri, 3 Mar 2023 14:30:48 GMT",
"version": "v1"
}
] | 2023-03-06 | [
[
"Das",
"Bireswar",
""
],
[
"Kumar",
"Anant",
""
],
[
"Sharma",
"Shivdutt",
""
],
[
"Thakkar",
"Dhara",
""
]
] | A finite group of order $n$ can be represented by its Cayley table. In the word-RAM model the Cayley table of a group of order $n$ can be stored using $O(n^2)$ words and can be used to answer a multiplication query in constant time. It is interesting to ask if we can design a data structure to store a group of order $n$ that uses $o(n^2)$ space but can still answer a multiplication query in constant time. We design a constant query-time data structure that can store any finite group using $O(n)$ words where $n$ is the order of the group. Farzan and Munro (ISSAC 2006) gave an information theoretic lower bound of $\Omega(n)$ on the number of words to store a group of order $n$. Since our data structure achieves this lower bound and answers queries in constant time, it is optimal in both space usage and query-time. A crucial step in the process is essentially to design linear space and constant query-time data structures for nonabelian simple groups. The data structures for nonabelian simple groups are designed using a lemma that we prove using the Classification Theorem for Finite Simple Groups (CFSG). |
1907.12009 | Jun Gao | Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang and Tie-Yan Liu | Representation Degeneration Problem in Training Natural Language
Generation Models | ICLR 2019 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We study an interesting problem in training neural network-based models for
natural language generation tasks, which we call the \emph{representation
degeneration problem}. We observe that when training a model for natural
language generation tasks through likelihood maximization with the weight tying
trick, especially with big training datasets, most of the learnt word
embeddings tend to degenerate and be distributed into a narrow cone, which
largely limits the representation power of word embeddings. We analyze the
conditions and causes of this problem and propose a novel regularization method
to address it. Experiments on language modeling and machine translation show
that our method can largely mitigate the representation degeneration problem
and achieve better performance than baseline algorithms.
| [
{
"created": "Sun, 28 Jul 2019 03:57:41 GMT",
"version": "v1"
}
] | 2019-07-30 | [
[
"Gao",
"Jun",
""
],
[
"He",
"Di",
""
],
[
"Tan",
"Xu",
""
],
[
"Qin",
"Tao",
""
],
[
"Wang",
"Liwei",
""
],
[
"Liu",
"Tie-Yan",
""
]
] | We study an interesting problem in training neural network-based models for natural language generation tasks, which we call the \emph{representation degeneration problem}. We observe that when training a model for natural language generation tasks through likelihood maximization with the weight tying trick, especially with big training datasets, most of the learnt word embeddings tend to degenerate and be distributed into a narrow cone, which largely limits the representation power of word embeddings. We analyze the conditions and causes of this problem and propose a novel regularization method to address it. Experiments on language modeling and machine translation show that our method can largely mitigate the representation degeneration problem and achieve better performance than baseline algorithms. |
2002.11660 | Nicolas Maudet | Aur\'elie Beynier and Nicolas Maudet and Simon Rey and Parham Shams | An Optimal Procedure to Check Pareto-Optimality in House Markets with
Single-Peaked Preferences | Was initially part of our submission arXiv:1906.10250. We followed
recommendations to make a distinct contribution with this material | null | null | null | cs.GT cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the problem of allocating one resource per agent with initial
endowments (house markets) has seen a renewed interest: indeed, in the
domain of strict preferences the Top Trading Cycle algorithm is known to be the
only procedure guaranteeing Pareto-optimality, individual rationality, and
strategy-proofness. However, the situation differs in the single-peaked domain.
Indeed, Bade presented the Crawler, an alternative procedure enjoying the same
properties, with the additional advantage of being implementable in obviously
dominant strategies. In this paper we further investigate the Crawler and
propose the Diver, a variant which checks optimally whether an allocation is
Pareto-optimal for single-peaked preferences, thus improving over known
techniques used for checking Pareto-optimality in more general domains. We also
prove that the Diver is asymptotically optimal in terms of communication
complexity.
| [
{
"created": "Fri, 14 Feb 2020 17:24:55 GMT",
"version": "v1"
}
] | 2020-02-27 | [
[
"Beynier",
"Aurélie",
""
],
[
"Maudet",
"Nicolas",
""
],
[
"Rey",
"Simon",
""
],
[
"Shams",
"Parham",
""
]
] | Recently, the problem of allocating one resource per agent with initial endowments (house markets) has seen a renewed interest: indeed, in the domain of strict preferences the Top Trading Cycle algorithm is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness. However, the situation differs in the single-peaked domain. Indeed, Bade presented the Crawler, an alternative procedure enjoying the same properties, with the additional advantage of being implementable in obviously dominant strategies. In this paper we further investigate the Crawler and propose the Diver, a variant which checks optimally whether an allocation is Pareto-optimal for single-peaked preferences, thus improving over known techniques used for checking Pareto-optimality in more general domains. We also prove that the Diver is asymptotically optimal in terms of communication complexity. |
2011.08988 | Yaroslava Lochman | Yaroslava Lochman, Oles Dobosevych, Rostyslav Hryniv, James Pritts | Minimal Solvers for Single-View Lens-Distorted Camera Auto-Calibration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes minimal solvers that use combinations of imaged
translational symmetries and parallel scene lines to jointly estimate lens
undistortion with either affine rectification or focal length and absolute
orientation. We use constraints provided by orthogonal scene planes to recover
the focal length. We show that solvers using feature combinations can recover
more accurate calibrations than solvers using only one feature type on scenes
that have a balance of lines and texture. We also show that the proposed
solvers are complementary and can be used together in a RANSAC-based estimator
to improve auto-calibration accuracy. State-of-the-art performance is
demonstrated on a standard dataset of lens-distorted urban images. The code is
available at https://github.com/ylochman/single-view-autocalib.
| [
{
"created": "Tue, 17 Nov 2020 22:32:17 GMT",
"version": "v1"
}
] | 2020-11-19 | [
[
"Lochman",
"Yaroslava",
""
],
[
"Dobosevych",
"Oles",
""
],
[
"Hryniv",
"Rostyslav",
""
],
[
"Pritts",
"James",
""
]
] | This paper proposes minimal solvers that use combinations of imaged translational symmetries and parallel scene lines to jointly estimate lens undistortion with either affine rectification or focal length and absolute orientation. We use constraints provided by orthogonal scene planes to recover the focal length. We show that solvers using feature combinations can recover more accurate calibrations than solvers using only one feature type on scenes that have a balance of lines and texture. We also show that the proposed solvers are complementary and can be used together in a RANSAC-based estimator to improve auto-calibration accuracy. State-of-the-art performance is demonstrated on a standard dataset of lens-distorted urban images. The code is available at https://github.com/ylochman/single-view-autocalib. |
1806.04042 | V\'ictor Mayoral Vilches | V\'ictor Mayoral Vilches, Laura Alzola Kirschgens, Asier Bilbao Calvo,
Alejandro Hern\'andez Cordero, Rodrigo Izquierdo Pis\'on, David Mayoral
Vilches, Aday Mu\~niz Rosas, Gorka Olalde Mendia, Lander Usategi San Juan,
Irati Zamalloa Ugarte, Endika Gil-Uriarte, Erik Tews and Andreas Peter | Introducing the Robot Security Framework (RSF), a standardized
methodology to perform security assessments in robotics | Complementary code available at https://github.com/aliasrobotics/RSF | null | null | null | cs.CR cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robots have gained relevance in society, increasingly performing critical
tasks. Nonetheless, robot security is being underestimated. Robotics security
is a complex landscape, which often requires a cross-disciplinary perspective
that classical security lags behind. To address this issue, we present the
Robot Security Framework (RSF), a methodology to perform systematic security
assessments in robots. We propose, adapt and develop specific terminology and
provide guidelines to enable a holistic security assessment following four main
layers (Physical, Network, Firmware and Application). We argue that modern
robotics should regard as equally relevant internal and external communication
security. Finally, we advocate against "security by obscurity". We conclude
that the field of security in robotics deserves further research efforts.
| [
{
"created": "Mon, 11 Jun 2018 15:06:30 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Sep 2019 11:23:16 GMT",
"version": "v2"
},
{
"created": "Sat, 21 Sep 2019 06:56:53 GMT",
"version": "v3"
},
{
"created": "Fri, 12 Nov 2021 15:53:41 GMT",
"version": "v4"
}
] | 2021-11-16 | [
[
"Vilches",
"Víctor Mayoral",
""
],
[
"Kirschgens",
"Laura Alzola",
""
],
[
"Calvo",
"Asier Bilbao",
""
],
[
"Cordero",
"Alejandro Hernández",
""
],
[
"Pisón",
"Rodrigo Izquierdo",
""
],
[
"Vilches",
"David Mayoral",
""
],
[
"Rosas",
"Aday Muñiz",
""
],
[
"Mendia",
"Gorka Olalde",
""
],
[
"Juan",
"Lander Usategi San",
""
],
[
"Ugarte",
"Irati Zamalloa",
""
],
[
"Gil-Uriarte",
"Endika",
""
],
[
"Tews",
"Erik",
""
],
[
"Peter",
"Andreas",
""
]
] | Robots have gained relevance in society, increasingly performing critical tasks. Nonetheless, robot security is being underestimated. Robotics security is a complex landscape, which often requires a cross-disciplinary perspective behind which classical security lags. To address this issue, we present the Robot Security Framework (RSF), a methodology to perform systematic security assessments in robots. We propose, adapt and develop specific terminology and provide guidelines to enable a holistic security assessment following four main layers (Physical, Network, Firmware and Application). We argue that modern robotics should regard internal and external communication security as equally relevant. Finally, we advocate against "security by obscurity". We conclude that the field of security in robotics deserves further research efforts. |
1503.06959 | Luca Baroffio | Luca Baroffio, Matteo Cesana, Alessandro Redondi, Marco Tagliasacchi | Fast keypoint detection in video sequences | submitted to IEEE International Conference on Image Processing 2015 | null | 10.1109/ICASSP.2016.7471895 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of computer vision tasks exploit a succinct representation of the
visual content in the form of sets of local features. Given an input image,
feature extraction algorithms identify a set of keypoints and assign to each of
them a description vector, based on the characteristics of the visual content
surrounding the interest point. Several tasks might require local features to
be extracted from a video sequence, on a frame-by-frame basis. Although
temporal downsampling has been proven to be an effective solution for mobile
augmented reality and visual search, high temporal resolution is a key
requirement for time-critical applications such as object tracking, event
recognition, pedestrian detection, surveillance. In recent years, more and more
computationally efficient visual feature detectors and descriptors have been
proposed. Nonetheless, such approaches are tailored to still images. In this
paper we propose a fast keypoint detection algorithm for video sequences that
exploits the temporal coherence of the sequence of keypoints. According to the
proposed method, each frame is preprocessed so as to identify the parts of the
input frame for which keypoint detection and description need to be performed.
Our experiments show that it is possible to achieve a reduction in
computational time of up to 40%, without significantly affecting the task
accuracy.
| [
{
"created": "Tue, 24 Mar 2015 09:28:28 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Baroffio",
"Luca",
""
],
[
"Cesana",
"Matteo",
""
],
[
"Redondi",
"Alessandro",
""
],
[
"Tagliasacchi",
"Marco",
""
]
] | A number of computer vision tasks exploit a succinct representation of the visual content in the form of sets of local features. Given an input image, feature extraction algorithms identify a set of keypoints and assign to each of them a description vector, based on the characteristics of the visual content surrounding the interest point. Several tasks might require local features to be extracted from a video sequence, on a frame-by-frame basis. Although temporal downsampling has been proven to be an effective solution for mobile augmented reality and visual search, high temporal resolution is a key requirement for time-critical applications such as object tracking, event recognition, pedestrian detection, surveillance. In recent years, more and more computationally efficient visual feature detectors and descriptors have been proposed. Nonetheless, such approaches are tailored to still images. In this paper we propose a fast keypoint detection algorithm for video sequences that exploits the temporal coherence of the sequence of keypoints. According to the proposed method, each frame is preprocessed so as to identify the parts of the input frame for which keypoint detection and description need to be performed. Our experiments show that it is possible to achieve a reduction in computational time of up to 40%, without significantly affecting the task accuracy. |
2102.06073 | Chi Ian Tang | Chi Ian Tang, Ignacio Perez-Pozuelo, Dimitris Spathis, Soren Brage,
Nick Wareham and Cecilia Mascolo | SelfHAR: Improving Human Activity Recognition through Self-training with
Unlabeled Data | Accepted for publication in Proceedings of the ACM on Interactive,
Mobile, Wearable and Ubiquitous Technologies (IMWUT) 2021 | null | 10.1145/3448112 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning and deep learning have shown great promise in mobile sensing
applications, including Human Activity Recognition. However, the performance of
such models in real-world settings largely depends on the availability of large
datasets that capture diverse behaviors. Recently, studies in computer vision
and natural language processing have shown that leveraging massive amounts of
unlabeled data enables performance on par with state-of-the-art supervised
models.
In this work, we present SelfHAR, a semi-supervised model that effectively
learns to leverage unlabeled mobile sensing datasets to complement small
labeled datasets. Our approach combines teacher-student self-training, which
distills the knowledge of unlabeled and labeled datasets while allowing for
data augmentation, and multi-task self-supervision, which learns robust
signal-level representations by predicting distorted versions of the input.
We evaluated SelfHAR on various HAR datasets and showed state-of-the-art
performance over supervised and previous semi-supervised approaches, with up to
12% increase in F1 score using the same number of model parameters at
inference. Furthermore, SelfHAR is data-efficient, reaching similar performance
using up to 10 times less labeled data compared to supervised approaches. Our
work not only achieves state-of-the-art performance in a diverse set of HAR
datasets, but also sheds light on how pre-training tasks may affect downstream
performance.
| [
{
"created": "Thu, 11 Feb 2021 15:40:35 GMT",
"version": "v1"
}
] | 2021-02-12 | [
[
"Tang",
"Chi Ian",
""
],
[
"Perez-Pozuelo",
"Ignacio",
""
],
[
"Spathis",
"Dimitris",
""
],
[
"Brage",
"Soren",
""
],
[
"Wareham",
"Nick",
""
],
[
"Mascolo",
"Cecilia",
""
]
] | Machine learning and deep learning have shown great promise in mobile sensing applications, including Human Activity Recognition. However, the performance of such models in real-world settings largely depends on the availability of large datasets that capture diverse behaviors. Recently, studies in computer vision and natural language processing have shown that leveraging massive amounts of unlabeled data enables performance on par with state-of-the-art supervised models. In this work, we present SelfHAR, a semi-supervised model that effectively learns to leverage unlabeled mobile sensing datasets to complement small labeled datasets. Our approach combines teacher-student self-training, which distills the knowledge of unlabeled and labeled datasets while allowing for data augmentation, and multi-task self-supervision, which learns robust signal-level representations by predicting distorted versions of the input. We evaluated SelfHAR on various HAR datasets and showed state-of-the-art performance over supervised and previous semi-supervised approaches, with up to 12% increase in F1 score using the same number of model parameters at inference. Furthermore, SelfHAR is data-efficient, reaching similar performance using up to 10 times less labeled data compared to supervised approaches. Our work not only achieves state-of-the-art performance in a diverse set of HAR datasets, but also sheds light on how pre-training tasks may affect downstream performance. |
2406.14266 | Anna Wr\'oblewska | Anna Wr\'oblewska, Marcel Witas, Kinga Fra\'nczak, Arkadiusz Knia\'z,
Siew Ann Cheong, Tan Seng Chee, Janusz Ho{\l}yst, Marcin Paprzycki | Intelligent Interface: Enhancing Lecture Engagement with Didactic
Activity Summaries | 9 pages, 6 figures | null | null | null | cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, multiple applications of machine learning have been introduced.
They include various possibilities arising when image analysis methods are
applied to, broadly understood, video streams. In this context, a novel tool
has been developed for academic educators to enhance the teaching process by
automating, summarizing, and offering prompt feedback on conducted lectures.
The implemented prototype utilizes machine learning-based techniques
to recognise selected didactic and behavioural teachers' features within
lecture video recordings.
Specifically, users (teachers) can upload their lecture videos, which are
preprocessed and analysed using machine learning models. Next, users can view
summaries of recognized didactic features through interactive charts and
tables. Additionally, stored ML-based prediction results support comparisons
between lectures based on their didactic content. The developed application
applies text-based models trained on lecture transcriptions, with transcription
quality enhanced by adopting an automatic speech recognition solution.
Furthermore, the system offers flexibility for (future) integration of
new/additional machine-learning models and software modules for image and video
analysis.
| [
{
"created": "Thu, 20 Jun 2024 12:45:23 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Wróblewska",
"Anna",
""
],
[
"Witas",
"Marcel",
""
],
[
"Frańczak",
"Kinga",
""
],
[
"Kniaź",
"Arkadiusz",
""
],
[
"Cheong",
"Siew Ann",
""
],
[
"Chee",
"Tan Seng",
""
],
[
"Hołyst",
"Janusz",
""
],
[
"Paprzycki",
"Marcin",
""
]
] | Recently, multiple applications of machine learning have been introduced. They include various possibilities arising when image analysis methods are applied to, broadly understood, video streams. In this context, a novel tool has been developed for academic educators to enhance the teaching process by automating, summarizing, and offering prompt feedback on conducted lectures. The implemented prototype utilizes machine learning-based techniques to recognise selected didactic and behavioural teachers' features within lecture video recordings. Specifically, users (teachers) can upload their lecture videos, which are preprocessed and analysed using machine learning models. Next, users can view summaries of recognized didactic features through interactive charts and tables. Additionally, stored ML-based prediction results support comparisons between lectures based on their didactic content. The developed application applies text-based models trained on lecture transcriptions, with transcription quality enhanced by adopting an automatic speech recognition solution. Furthermore, the system offers flexibility for (future) integration of new/additional machine-learning models and software modules for image and video analysis. |
2307.02415 | Martin Costa | Sayan Bhattacharya and Mart\'in Costa and Nadav Panski and Shay
Solomon | Density-Sensitive Algorithms for $(\Delta + 1)$-Edge Coloring | To appear at ESA'24 | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | Vizing's theorem asserts the existence of a $(\Delta+1)$-edge coloring for
any graph $G$, where $\Delta = \Delta(G)$ denotes the maximum degree of $G$.
Several polynomial time $(\Delta+1)$-edge coloring algorithms are known, and
the state-of-the-art running time (up to polylogarithmic factors) is
$\tilde{O}(\min\{m \cdot \sqrt{n}, m \cdot \Delta\})$, by Gabow et al.\ from
1985, where $n$ and $m$ denote the number of vertices and edges in the graph,
respectively. (The $\tilde{O}$ notation suppresses polylogarithmic factors.)
Recently, Sinnamon shaved off a polylogarithmic factor from the time bound of
Gabow et al.
The {arboricity} $\alpha = \alpha(G)$ of a graph $G$ is the minimum number of
edge-disjoint forests into which its edge set can be partitioned, and it is a
measure of the graph's "uniform density". While $\alpha \le \Delta$ in any
graph, many natural and real-world graphs exhibit a significant separation
between $\alpha$ and $\Delta$.
In this work we design a $(\Delta+1)$-edge coloring algorithm with a running
time of $\tilde{O}(\min\{m \cdot \sqrt{n}, m \cdot \Delta\})\cdot
\frac{\alpha}{\Delta}$, thus improving the longstanding time barrier by a
factor of $\frac{\alpha}{\Delta}$. In particular, we achieve a near-linear
runtime for bounded arboricity graphs (i.e., $\alpha = \tilde{O}(1)$) as well
as when $\alpha = \tilde{O}(\frac{\Delta}{\sqrt{n}})$. Our algorithm builds on
Sinnamon's algorithm, and can be viewed as a density-sensitive refinement of
it.
| [
{
"created": "Wed, 5 Jul 2023 16:37:32 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2024 14:03:46 GMT",
"version": "v2"
}
] | 2024-08-05 | [
[
"Bhattacharya",
"Sayan",
""
],
[
"Costa",
"Martín",
""
],
[
"Panski",
"Nadav",
""
],
[
"Solomon",
"Shay",
""
]
] | Vizing's theorem asserts the existence of a $(\Delta+1)$-edge coloring for any graph $G$, where $\Delta = \Delta(G)$ denotes the maximum degree of $G$. Several polynomial time $(\Delta+1)$-edge coloring algorithms are known, and the state-of-the-art running time (up to polylogarithmic factors) is $\tilde{O}(\min\{m \cdot \sqrt{n}, m \cdot \Delta\})$, by Gabow et al.\ from 1985, where $n$ and $m$ denote the number of vertices and edges in the graph, respectively. (The $\tilde{O}$ notation suppresses polylogarithmic factors.) Recently, Sinnamon shaved off a polylogarithmic factor from the time bound of Gabow et al. The {arboricity} $\alpha = \alpha(G)$ of a graph $G$ is the minimum number of edge-disjoint forests into which its edge set can be partitioned, and it is a measure of the graph's "uniform density". While $\alpha \le \Delta$ in any graph, many natural and real-world graphs exhibit a significant separation between $\alpha$ and $\Delta$. In this work we design a $(\Delta+1)$-edge coloring algorithm with a running time of $\tilde{O}(\min\{m \cdot \sqrt{n}, m \cdot \Delta\})\cdot \frac{\alpha}{\Delta}$, thus improving the longstanding time barrier by a factor of $\frac{\alpha}{\Delta}$. In particular, we achieve a near-linear runtime for bounded arboricity graphs (i.e., $\alpha = \tilde{O}(1)$) as well as when $\alpha = \tilde{O}(\frac{\Delta}{\sqrt{n}})$. Our algorithm builds on Sinnamon's algorithm, and can be viewed as a density-sensitive refinement of it. |
1407.5981 | Tom Crick | Tom Crick, Benjamin A. Hall and Samin Ishtiaq | "Can I Implement Your Algorithm?": A Model for Reproducible Research
Software | Accepted for the 2nd Workshop on Sustainable Software for Science:
Practice and Experiences (WSSSPE2); 5 pages, LaTeX | null | null | null | cs.SE cs.CE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The reproduction and replication of novel results has become a major issue
for a number of scientific disciplines. In computer science and related
computational disciplines such as systems biology, the issues closely revolve
around the ability to implement novel algorithms and approaches. Taking an
approach from the literature and applying it to a new codebase frequently
requires local knowledge missing from the published manuscripts and project
websites. Alongside this issue, benchmarking, and the development of fair ---
and widely available --- benchmark sets present another barrier.
In this paper, we outline several suggestions to address these issues, driven
by specific examples from a range of scientific domains. Finally, based on
these suggestions, we propose a new open platform for scientific software
development which effectively isolates specific dependencies from the
individual researcher and their workstation and allows faster, more powerful
sharing of the results of scientific software engineering.
| [
{
"created": "Tue, 22 Jul 2014 19:29:34 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Sep 2014 13:35:18 GMT",
"version": "v2"
}
] | 2014-09-17 | [
[
"Crick",
"Tom",
""
],
[
"Hall",
"Benjamin A.",
""
],
[
"Ishtiaq",
"Samin",
""
]
] | The reproduction and replication of novel results has become a major issue for a number of scientific disciplines. In computer science and related computational disciplines such as systems biology, the issues closely revolve around the ability to implement novel algorithms and approaches. Taking an approach from the literature and applying it to a new codebase frequently requires local knowledge missing from the published manuscripts and project websites. Alongside this issue, benchmarking, and the development of fair --- and widely available --- benchmark sets present another barrier. In this paper, we outline several suggestions to address these issues, driven by specific examples from a range of scientific domains. Finally, based on these suggestions, we propose a new open platform for scientific software development which effectively isolates specific dependencies from the individual researcher and their workstation and allows faster, more powerful sharing of the results of scientific software engineering. |
1405.2063 | Bob Scurlock | Bob J. Scurlock | Use of ARAS 360 to Facilitate Rapid Development of Articulated Total
Body Biomechanical Physics Simulations | null | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of 3-dimensional environments to be used within a
biomechanical physics simulation framework, such as Articulated Total Body, can
be laborious and time intensive. This brief article demonstrates how the ARAS
360 software package can aid the user by speeding up development time.
| [
{
"created": "Thu, 17 Apr 2014 19:32:40 GMT",
"version": "v1"
}
] | 2014-05-09 | [
[
"Scurlock",
"Bob J.",
""
]
] | The development of 3-dimensional environments to be used within a biomechanical physics simulation framework, such as Articulated Total Body, can be laborious and time intensive. This brief article demonstrates how the ARAS 360 software package can aid the user by speeding up development time. |
1912.12861 | Sofia Dokuka | Sofia Dokuka, Ivan Zaikin, Kate Furman, Maksim Tsvetovat and Alex
Furman | Wisdom of collaborators: a peer-review approach to performance appraisal | null | null | null | null | cs.SI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individual performance and reputation within a company are major factors that
influence wage distribution, promotion and firing. Due to the complexity and
collaborative nature of contemporary business processes, the evaluation of
individual impact in the majority of organizations is an ambiguous and
non-trivial task. Existing performance appraisal approaches are often affected
by individuals' biased judgements, and organizations are dissatisfied with the
results of evaluations. We assert that employees can provide accurate
measurement of their peer performance in a complex collaborative environment.
We propose a novel metric, the Peer Rank Score (PRS), that evaluates individual
reputations and the non-quantifiable individual impact. PRS is based on
pairwise comparisons of employees. We show high robustness of the algorithm on
simulations and empirically validate it for a genetic testing company on more
than one thousand employees using peer reviews over the course of three years.
| [
{
"created": "Mon, 30 Dec 2019 09:23:51 GMT",
"version": "v1"
}
] | 2020-01-01 | [
[
"Dokuka",
"Sofia",
""
],
[
"Zaikin",
"Ivan",
""
],
[
"Furman",
"Kate",
""
],
[
"Tsvetovat",
"Maksim",
""
],
[
"Furman",
"Alex",
""
]
] | Individual performance and reputation within a company are major factors that influence wage distribution, promotion and firing. Due to the complexity and collaborative nature of contemporary business processes, the evaluation of individual impact in the majority of organizations is an ambiguous and non-trivial task. Existing performance appraisal approaches are often affected by individuals' biased judgements, and organizations are dissatisfied with the results of evaluations. We assert that employees can provide accurate measurement of their peer performance in a complex collaborative environment. We propose a novel metric, the Peer Rank Score (PRS), that evaluates individual reputations and the non-quantifiable individual impact. PRS is based on pairwise comparisons of employees. We show high robustness of the algorithm on simulations and empirically validate it for a genetic testing company on more than one thousand employees using peer reviews over the course of three years. |
2105.09696 | Samuel Pagliarini | Mateus Saquetti, Raphael M. Brum, Bruno Zatt, Samuel Pagliarini,
Weverton Cordeiro, Jose R. Azambuja | A Terabit Hybrid FPGA-ASIC Platform for Switch Virtualization | ISVLSI | null | null | null | cs.AR cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The roll-out of technologies like 5G and the need for multi-terabit bandwidth
in backbone networks require networking companies to make significant
investments to keep up with growing service demands. For lower capital
expenditure and faster time-to-market, companies can resort to
anything-as-a-service providers to lease virtual resources. Nevertheless,
existing virtualization technologies are still lagging behind next-generation
networks' requirements. This paper breaks the terabit barrier by introducing a
hybrid FPGA-ASIC architecture to virtualize programmable forwarding planes. In
contrast to existing solutions, our architecture involves an ASIC that
multiplexes network flows between programmable virtual switches running in an
FPGA capable of full and partial reconfiguration, enabling virtual switch
hot-swapping. Our evaluation shows the feasibility of a switch virtualization
architecture capable of achieving a combined throughput of 3.2 Tbps by having
up to 26 virtual switch instances in parallel with low resource occupation
overhead.
| [
{
"created": "Thu, 20 May 2021 12:17:49 GMT",
"version": "v1"
}
] | 2021-05-21 | [
[
"Saquetti",
"Mateus",
""
],
[
"Brum",
"Raphael M.",
""
],
[
"Zatt",
"Bruno",
""
],
[
"Pagliarini",
"Samuel",
""
],
[
"Cordeiro",
"Weverton",
""
],
[
"Azambuja",
"Jose R.",
""
]
] | The roll-out of technologies like 5G and the need for multi-terabit bandwidth in backbone networks require networking companies to make significant investments to keep up with growing service demands. For lower capital expenditure and faster time-to-market, companies can resort to anything-as-a-service providers to lease virtual resources. Nevertheless, existing virtualization technologies are still lagging behind next-generation networks' requirements. This paper breaks the terabit barrier by introducing a hybrid FPGA-ASIC architecture to virtualize programmable forwarding planes. In contrast to existing solutions, our architecture involves an ASIC that multiplexes network flows between programmable virtual switches running in an FPGA capable of full and partial reconfiguration, enabling virtual switch hot-swapping. Our evaluation shows the feasibility of a switch virtualization architecture capable of achieving a combined throughput of 3.2 Tbps by having up to 26 virtual switch instances in parallel with low resource occupation overhead. |
2106.14011 | Jordan Masakuna F | Jordan F. Masakuna and Steve Kroon | Distributed Identification of Central Nodes with Less Communication | 11 pages of work on distributed assessment of network centrality. It
is being prepared and finalised to be submitted to a conference | null | null | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | This paper is concerned with distributed detection of central nodes in
complex networks using closeness centrality. Closeness centrality plays an
essential role in network analysis. Evaluating closeness centrality exactly
requires complete knowledge of the network; for large networks, this may be
inefficient, so closeness centrality should be approximated. Distributed tasks
such as leader election can make effective use of centrality information for
highly central nodes, but complete network information is not locally
available. This paper refines a distributed centrality computation algorithm by
You et al. [24] by pruning nodes which are almost certainly not most central.
For example, in a large network, leaf nodes cannot play a central role. This
leads to a reduction in the number of messages exchanged to determine the
centrality of the remaining nodes. Our results show that our approach reduces
the number of messages for networks which contain many prunable nodes. Our
results also show that reducing the number of messages may
| [
{
"created": "Sat, 26 Jun 2021 12:19:33 GMT",
"version": "v1"
}
] | 2021-06-29 | [
[
"Masakuna",
"Jordan F.",
""
],
[
"Kroon",
"Steve",
""
]
] | This paper is concerned with distributed detection of central nodes in complex networks using closeness centrality. Closeness centrality plays an essential role in network analysis. Evaluating closeness centrality exactly requires complete knowledge of the network; for large networks, this may be inefficient, so closeness centrality should be approximated. Distributed tasks such as leader election can make effective use of centrality information for highly central nodes, but complete network information is not locally available. This paper refines a distributed centrality computation algorithm by You et al. [24] by pruning nodes which are almost certainly not most central. For example, in a large network, leaf nodes cannot play a central role. This leads to a reduction in the number of messages exchanged to determine the centrality of the remaining nodes. Our results show that our approach reduces the number of messages for networks which contain many prunable nodes. Our results also show that reducing the number of messages may |
1911.11750 | Nino Arsov | Nino Arsov, Milan Dukovski, Blagoja Evkoski, Stefan Cvetkovski | A Measure of Similarity in Textual Data Using Spearman's Rank
Correlation Coefficient | null | null | null | null | cs.LG cs.CL cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last decade, many diverse advances have occurred in the field of
information extraction from data. Information extraction in its simplest form
takes place in computing environments, where structured data can be extracted
through a series of queries. The continuous expansion of quantities of data
has therefore provided an opportunity for knowledge extraction (KE) from a
textual document (TD). A typical problem of this kind is the extraction of
common characteristics and knowledge from a group of TDs, with the possibility
to group such similar TDs in a process known as clustering. In this paper we
present a technique for such KE among a group of TDs related to the common
characteristics and meaning of their content. Our technique is based on the
Spearman's Rank Correlation Coefficient (SRCC), which the conducted experiments
have proven to be a comprehensive measure for achieving high-quality KE.
| [
{
"created": "Tue, 26 Nov 2019 18:38:59 GMT",
"version": "v1"
}
] | 2019-11-27 | [
[
"Arsov",
"Nino",
""
],
[
"Dukovski",
"Milan",
""
],
[
"Evkoski",
"Blagoja",
""
],
[
"Cvetkovski",
"Stefan",
""
]
] | In the last decade, many diverse advances have occurred in the field of information extraction from data. Information extraction in its simplest form takes place in computing environments, where structured data can be extracted through a series of queries. The continuous expansion of quantities of data has therefore provided an opportunity for knowledge extraction (KE) from a textual document (TD). A typical problem of this kind is the extraction of common characteristics and knowledge from a group of TDs, with the possibility to group such similar TDs in a process known as clustering. In this paper we present a technique for such KE among a group of TDs related to the common characteristics and meaning of their content. Our technique is based on the Spearman's Rank Correlation Coefficient (SRCC), which the conducted experiments have proven to be a comprehensive measure for achieving high-quality KE. |
2203.00914 | Hui Yuan | Hao Liu, Hui Yuan, Junhui Hou, Raouf Hamzaoui, Wei Gao | PUFA-GAN: A Frequency-Aware Generative Adversarial Network for 3D Point
Cloud Upsampling | null | null | 10.1109/TIP.2022.3222918 | null | cs.CV cs.MM eess.IV | http://creativecommons.org/publicdomain/zero/1.0/ | We propose a generative adversarial network for point cloud upsampling, which
can not only make the upsampled points evenly distributed on the underlying
surface but also efficiently generate clean high frequency regions. The
generator of our network includes a dynamic graph hierarchical residual
aggregation unit and a hierarchical residual aggregation unit for point feature
extraction and upsampling, respectively. The former extracts multiscale
point-wise descriptive features, while the latter captures rich feature details
with hierarchical residuals. To generate neat edges, our discriminator uses a
graph filter to extract and retain high frequency points. The generated high
resolution point cloud and corresponding high frequency points help the
discriminator learn the global and high frequency properties of the point
cloud. We also propose an identity distribution loss function to make sure that
the upsampled points remain on the underlying surface of the input low
resolution point cloud. To assess the regularity of the upsampled points in
high frequency regions, we introduce two evaluation metrics. Objective and
subjective results demonstrate that the visual quality of the upsampled point
clouds generated by our method is better than that of the state-of-the-art
methods.
| [
{
"created": "Wed, 2 Mar 2022 07:47:46 GMT",
"version": "v1"
}
] | 2022-12-14 | [
[
"Liu",
"Hao",
""
],
[
"Yuan",
"Hui",
""
],
[
"Hou",
"Junhui",
""
],
[
"Hamzaoui",
"Raouf",
""
],
[
"Gao",
"Wei",
""
]
] | We propose a generative adversarial network for point cloud upsampling, which can not only make the upsampled points evenly distributed on the underlying surface but also efficiently generate clean high frequency regions. The generator of our network includes a dynamic graph hierarchical residual aggregation unit and a hierarchical residual aggregation unit for point feature extraction and upsampling, respectively. The former extracts multiscale point-wise descriptive features, while the latter captures rich feature details with hierarchical residuals. To generate neat edges, our discriminator uses a graph filter to extract and retain high frequency points. The generated high resolution point cloud and corresponding high frequency points help the discriminator learn the global and high frequency properties of the point cloud. We also propose an identity distribution loss function to make sure that the upsampled points remain on the underlying surface of the input low resolution point cloud. To assess the regularity of the upsampled points in high frequency regions, we introduce two evaluation metrics. Objective and subjective results demonstrate that the visual quality of the upsampled point clouds generated by our method is better than that of the state-of-the-art methods. |
2305.03427 | Nurettin Turan | Nurettin Turan, Benedikt Fesl, Wolfgang Utschick | Enhanced Low-Complexity FDD System Feedback with Variable Bit Lengths
via Generative Modeling | null | null | null | null | cs.IT eess.SP math.IT | http://creativecommons.org/licenses/by/4.0/ | Recently, a versatile limited feedback scheme based on a Gaussian mixture
model (GMM) was proposed for frequency division duplex (FDD) systems. This
scheme provides high flexibility regarding various system parameters and is
applicable to both point-to-point multiple-input multiple-output (MIMO) and
multi-user MIMO (MU-MIMO) communications. The GMM is learned to cover the
operation of all mobile terminals (MTs) located inside the base station (BS)
cell, and each MT only needs to evaluate its strongest mixture component as
feedback, eliminating the need for channel estimation at the MT. In this work,
we extend the GMM-based feedback scheme to variable feedback lengths by
leveraging a single learned GMM through merging or pruning of dispensable
mixture components. Additionally, the GMM covariances are restricted to
Toeplitz or circulant structure through model-based insights. These extensions
significantly reduce the offloading amount and enhance the clustering ability
of the GMM which, in turn, leads to an improved system performance. Simulation
results for both point-to-point and multi-user systems demonstrate the
effectiveness of the proposed extensions.
| [
{
"created": "Fri, 5 May 2023 11:02:01 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Nov 2023 09:40:14 GMT",
"version": "v2"
}
] | 2023-11-29 | [
[
"Turan",
"Nurettin",
""
],
[
"Fesl",
"Benedikt",
""
],
[
"Utschick",
"Wolfgang",
""
]
] | Recently, a versatile limited feedback scheme based on a Gaussian mixture model (GMM) was proposed for frequency division duplex (FDD) systems. This scheme provides high flexibility regarding various system parameters and is applicable to both point-to-point multiple-input multiple-output (MIMO) and multi-user MIMO (MU-MIMO) communications. The GMM is learned to cover the operation of all mobile terminals (MTs) located inside the base station (BS) cell, and each MT only needs to evaluate its strongest mixture component as feedback, eliminating the need for channel estimation at the MT. In this work, we extend the GMM-based feedback scheme to variable feedback lengths by leveraging a single learned GMM through merging or pruning of dispensable mixture components. Additionally, the GMM covariances are restricted to Toeplitz or circulant structure through model-based insights. These extensions significantly reduce the offloading amount and enhance the clustering ability of the GMM which, in turn, leads to an improved system performance. Simulation results for both point-to-point and multi-user systems demonstrate the effectiveness of the proposed extensions. |
1703.02899 | Andreas Doerr | Andreas Doerr, Duy Nguyen-Tuong, Alonso Marco, Stefan Schaal,
Sebastian Trimpe | Model-Based Policy Search for Automatic Tuning of Multivariate PID
Controllers | Accepted final version to appear in 2017 IEEE International
Conference on Robotics and Automation (ICRA) | null | null | null | cs.LG cs.RO cs.SY stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PID control architectures are widely used in industrial applications. Despite
their low number of open parameters, tuning multiple, coupled PID controllers
can become tedious in practice. In this paper, we extend PILCO, a model-based
policy search framework, to automatically tune multivariate PID controllers
purely based on data observed on an otherwise unknown system. The system's
state is extended appropriately to frame the PID policy as a static state
feedback policy. This renders PID tuning possible as the solution of a finite
horizon optimal control problem without further a priori knowledge. The
framework is applied to the task of balancing an inverted pendulum on a seven
degree-of-freedom robotic arm, thereby demonstrating its capabilities of fast
and data-efficient policy learning, even on complex real world problems.
| [
{
"created": "Wed, 8 Mar 2017 16:28:17 GMT",
"version": "v1"
}
] | 2017-03-09 | [
[
"Doerr",
"Andreas",
""
],
[
"Nguyen-Tuong",
"Duy",
""
],
[
"Marco",
"Alonso",
""
],
[
"Schaal",
"Stefan",
""
],
[
"Trimpe",
"Sebastian",
""
]
] | PID control architectures are widely used in industrial applications. Despite their low number of open parameters, tuning multiple, coupled PID controllers can become tedious in practice. In this paper, we extend PILCO, a model-based policy search framework, to automatically tune multivariate PID controllers purely based on data observed on an otherwise unknown system. The system's state is extended appropriately to frame the PID policy as a static state feedback policy. This renders PID tuning possible as the solution of a finite horizon optimal control problem without further a priori knowledge. The framework is applied to the task of balancing an inverted pendulum on a seven degree-of-freedom robotic arm, thereby demonstrating its capabilities of fast and data-efficient policy learning, even on complex real world problems. |
2403.04605 | Diego Mesquita | Erik Nascimento, Diego Mesquita, Samuel Kaski, Amauri H Souza | In-n-Out: Calibrating Graph Neural Networks for Link Prediction | 18 pages, 4 figures, 8 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Deep neural networks are notoriously miscalibrated, i.e., their outputs do
not reflect the true probability of the event we aim to predict. While networks
for tabular or image data are usually overconfident, recent works have shown
that graph neural networks (GNNs) show the opposite behavior for node-level
classification. But what happens when we are predicting links? We show that, in
this case, GNNs often exhibit a mixed behavior. More specifically, they may be
overconfident in negative predictions while being underconfident in positive
ones. Based on this observation, we propose IN-N-OUT, the first-ever method to
calibrate GNNs for link prediction. IN-N-OUT is based on two simple intuitions:
i) attributing true/false labels to an edge while respecting a GNN's prediction
should cause but small fluctuations in that edge's embedding; and, conversely,
ii) if we label that same edge contradicting our GNN, embeddings should change
more substantially. An extensive experimental campaign shows that IN-N-OUT
significantly improves the calibration of GNNs in link prediction, consistently
outperforming the baselines available -- which are not designed for this
specific task.
| [
{
"created": "Thu, 7 Mar 2024 15:54:46 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 14:49:34 GMT",
"version": "v2"
}
] | 2024-03-11 | [
[
"Nascimento",
"Erik",
""
],
[
"Mesquita",
"Diego",
""
],
[
"Kaski",
"Samuel",
""
],
[
"Souza",
"Amauri H",
""
]
] | Deep neural networks are notoriously miscalibrated, i.e., their outputs do not reflect the true probability of the event we aim to predict. While networks for tabular or image data are usually overconfident, recent works have shown that graph neural networks (GNNs) show the opposite behavior for node-level classification. But what happens when we are predicting links? We show that, in this case, GNNs often exhibit a mixed behavior. More specifically, they may be overconfident in negative predictions while being underconfident in positive ones. Based on this observation, we propose IN-N-OUT, the first-ever method to calibrate GNNs for link prediction. IN-N-OUT is based on two simple intuitions: i) attributing true/false labels to an edge while respecting a GNN's prediction should cause but small fluctuations in that edge's embedding; and, conversely, ii) if we label that same edge contradicting our GNN, embeddings should change more substantially. An extensive experimental campaign shows that IN-N-OUT significantly improves the calibration of GNNs in link prediction, consistently outperforming the baselines available -- which are not designed for this specific task.
2208.00301 | Chen Chen | Yichen Han, Christopher Bo Han, Chen Chen, Peng Wei Lee, Michael
Hogarth, Alison A. Moore, Nadir Weibel, Emilia Farcas | Towards Visualization of Time-Series Ecological Momentary Assessment
(EMA) Data on Standalone Voice-First Virtual Assistants | 4 pages, The 24th International ACM SIGACCESS Conference on Computers
and Accessibility | null | 10.1145/3517428.3550398 | null | cs.HC cs.CY | http://creativecommons.org/licenses/by/4.0/ | Population aging is an increasingly important consideration for health care
in the 21st century, and continued access to and interaction with digital
health information is a key challenge for aging populations. Voice-based
Intelligent Virtual Assistants (IVAs) are promising to improve the Quality of
Life (QoL) of older adults, and coupled with Ecological Momentary Assessments
(EMA) they can be effective in collecting important health information from older
adults, especially when it comes to repeated time-based events. However, this
same EMA data is hard to access for the older adult: although the newest IVAs
are equipped with a display, the effectiveness of visualizing time-series based
EMA data on standalone IVAs has not been explored. To investigate the potential
opportunities for visualizing time-series based EMA data on standalone IVAs, we
designed a prototype system, where older adults are able to query and examine
the time-series EMA data on Amazon Echo Show - a widely used commercially
available standalone screen-based IVA. We conducted a preliminary
semi-structured interview with a geriatrician and an older adult, and
identified three findings that should be carefully considered when designing
such visualizations.
| [
{
"created": "Sat, 30 Jul 2022 20:03:15 GMT",
"version": "v1"
}
] | 2022-08-02 | [
[
"Han",
"Yichen",
""
],
[
"Han",
"Christopher Bo",
""
],
[
"Chen",
"Chen",
""
],
[
"Lee",
"Peng Wei",
""
],
[
"Hogarth",
"Michael",
""
],
[
"Moore",
"Alison A.",
""
],
[
"Weibel",
"Nadir",
""
],
[
"Farcas",
"Emilia",
""
]
] | Population aging is an increasingly important consideration for health care in the 21st century, and continued access to and interaction with digital health information is a key challenge for aging populations. Voice-based Intelligent Virtual Assistants (IVAs) are promising to improve the Quality of Life (QoL) of older adults, and coupled with Ecological Momentary Assessments (EMA) they can be effective in collecting important health information from older adults, especially when it comes to repeated time-based events. However, this same EMA data is hard to access for the older adult: although the newest IVAs are equipped with a display, the effectiveness of visualizing time-series based EMA data on standalone IVAs has not been explored. To investigate the potential opportunities for visualizing time-series based EMA data on standalone IVAs, we designed a prototype system, where older adults are able to query and examine the time-series EMA data on Amazon Echo Show - a widely used commercially available standalone screen-based IVA. We conducted a preliminary semi-structured interview with a geriatrician and an older adult, and identified three findings that should be carefully considered when designing such visualizations.
1909.13162 | Aneek Barman Roy | Aneek Barman Roy | Translation, Sentiment and Voices: A Computational Model to Translate
and Analyse Voices from Real-Time Video Calling | 79 Pages, 19 Tables, 24 Figures, A M.Sc Dissertation | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the internet quickly becoming easily accessible to many, voice calling over
the internet is slowly gaining momentum. Individuals have been engaging in video
communication across the world in different languages. The decade saw the
emergence of language translation using neural networks as well. With more data
being generated in audio and visual forms, there is a growing need and
challenge to analyse such information for many researchers from academia and
industry. The availability of video chat corpora is limited as organizations
protect user privacy and ensure data security. For this reason, an audio-visual
communication system (VidALL) has been developed and audio-speeches were
extracted. To understand human nature while answering a video call, an analysis
was conducted where polarity and vocal intensity were considered as parameters.
Simultaneously, a translation model using a neural approach was developed to
translate English sentences to French. Simple RNN-based and Embedded-RNN based
models were used for the translation model. BLEU score and target sentence
comparators were used to check sentence correctness. Embedded-RNN showed an
accuracy of 88.71 percent and predicted correct sentences. A key finding
suggests that polarity is a good estimator for understanding human emotion.
| [
{
"created": "Sat, 28 Sep 2019 22:11:03 GMT",
"version": "v1"
}
] | 2019-10-01 | [
[
"Roy",
"Aneek Barman",
""
]
] | With the internet quickly becoming easily accessible to many, voice calling over the internet is slowly gaining momentum. Individuals have been engaging in video communication across the world in different languages. The decade saw the emergence of language translation using neural networks as well. With more data being generated in audio and visual forms, there is a growing need and challenge to analyse such information for many researchers from academia and industry. The availability of video chat corpora is limited as organizations protect user privacy and ensure data security. For this reason, an audio-visual communication system (VidALL) has been developed and audio-speeches were extracted. To understand human nature while answering a video call, an analysis was conducted where polarity and vocal intensity were considered as parameters. Simultaneously, a translation model using a neural approach was developed to translate English sentences to French. Simple RNN-based and Embedded-RNN based models were used for the translation model. BLEU score and target sentence comparators were used to check sentence correctness. Embedded-RNN showed an accuracy of 88.71 percent and predicted correct sentences. A key finding suggests that polarity is a good estimator for understanding human emotion.
2205.04665 | Tian-Sheuan Chang | Yu-Hsiang Chiang, Tian-Sheuan Chang and Shyh Jye Jou | A 14uJ/Decision Keyword Spotting Accelerator with In-SRAM-Computing and
On Chip Learning for Customization | 10 pages, 18 figures, to be published in IEEE Transaction on VLSI,
2022 | null | 10.1109/TVLSI.2022.3172685 | null | cs.AR cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Keyword spotting has gained popularity as a natural way to interact with
consumer devices in recent years. However, because of its always-on nature and
the variety of speech, it necessitates a low-power design as well as user
customization. This paper describes a low-power, energy-efficient keyword
spotting accelerator with SRAM based in-memory computing (IMC) and on-chip
learning for user customization. However, IMC is constrained by macro size,
limited precision, and non-ideal effects. To address the issues mentioned
above, this paper proposes bias compensation and fine-tuning using an IMC-aware
model design. Furthermore, because learning with low-precision edge devices
results in zero error and gradient values due to quantization, this paper
proposes error scaling and small gradient accumulation to achieve the same
accuracy as ideal model training. The simulation results show that with user
customization, we can recover the accuracy loss from 51.08\% to 89.76\% with
compensation and fine-tuning and further improve to 96.71\% with customization.
The chip implementation can successfully run the model with only 14$uJ$ per
decision. When compared to the state-of-the-art works, the presented design has
higher energy efficiency with additional on-chip model customization
capabilities for higher accuracy.
| [
{
"created": "Tue, 10 May 2022 04:42:20 GMT",
"version": "v1"
}
] | 2022-06-10 | [
[
"Chiang",
"Yu-Hsiang",
""
],
[
"Chang",
"Tian-Sheuan",
""
],
[
"Jou",
"Shyh Jye",
""
]
] | Keyword spotting has gained popularity as a natural way to interact with consumer devices in recent years. However, because of its always-on nature and the variety of speech, it necessitates a low-power design as well as user customization. This paper describes a low-power, energy-efficient keyword spotting accelerator with SRAM based in-memory computing (IMC) and on-chip learning for user customization. However, IMC is constrained by macro size, limited precision, and non-ideal effects. To address the issues mentioned above, this paper proposes bias compensation and fine-tuning using an IMC-aware model design. Furthermore, because learning with low-precision edge devices results in zero error and gradient values due to quantization, this paper proposes error scaling and small gradient accumulation to achieve the same accuracy as ideal model training. The simulation results show that with user customization, we can recover the accuracy loss from 51.08\% to 89.76\% with compensation and fine-tuning and further improve to 96.71\% with customization. The chip implementation can successfully run the model with only 14$uJ$ per decision. When compared to the state-of-the-art works, the presented design has higher energy efficiency with additional on-chip model customization capabilities for higher accuracy. |
2002.11383 | Tai Do Duc | Tai Do Duc, Shuo Shao, Chaoping Xing | Symmetric uncoded caching schemes with low subpacketization levels | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Caching is a commonly used technique in content-delivery networks which aims
to deliver information from hosting servers to users in the most efficient way.
In 2014, Maddah-Ali and Niesen formulated caching into a formal information
theoretic problem and it has gained a lot of attention since then. It is known
that the caching schemes proposed by Maddah-Ali and Niesen and by Yu et al. are optimal,
that is, they require the least number of transmissions from the server to
satisfy all users' demands. However, for these schemes to work, each file needs
to be partitioned into $F^*$ subfiles ($F^*$ is called the subpacketization
level of files) with $F^*$ growing exponentially in the number $K$ of users. As
a result, it is problematic to apply these schemes in practical situations,
where $K$ tends to be very large. This raises the following questions: (1) are
there optimal schemes in which each file is partitioned into $F$ subfiles,
where $F$ is not exponential, say polynomial for example, in $K$? (2) if the
answer to this question is no, is there a near-optimal scheme, a scheme which
is as asymptotically good as the one in \cite{ali1,yu}, with $F$ polynomial in
$K$? Both these questions are open.
Our main contribution in this paper is to provide answers to the above questions.
Firstly, we prove that under some mild restriction on user's cache rate, there
are no optimal schemes with $F$ smaller than $F^*$. Moreover, we give necessary
and sufficient conditions for the existence of optimal schemes in this case.
Secondly, we provide an affirmative answer to the second question raised above
by an explicit construction and a detailed performance analysis.
| [
{
"created": "Wed, 26 Feb 2020 09:55:38 GMT",
"version": "v1"
}
] | 2020-02-27 | [
[
"Duc",
"Tai Do",
""
],
[
"Shao",
"Shuo",
""
],
[
"Xing",
"Chaoping",
""
]
] | Caching is a commonly used technique in content-delivery networks which aims to deliver information from hosting servers to users in the most efficient way. In 2014, Maddah-Ali and Niesen formulated caching into a formal information theoretic problem and it has gained a lot of attention since then. It is known that the caching schemes proposed by Maddah-Ali and Niesen and by Yu et al. are optimal, that is, they require the least number of transmissions from the server to satisfy all users' demands. However, for these schemes to work, each file needs to be partitioned into $F^*$ subfiles ($F^*$ is called the subpacketization level of files) with $F^*$ growing exponentially in the number $K$ of users. As a result, it is problematic to apply these schemes in practical situations, where $K$ tends to be very large. This raises the following questions: (1) are there optimal schemes in which each file is partitioned into $F$ subfiles, where $F$ is not exponential, say polynomial for example, in $K$? (2) if the answer to this question is no, is there a near-optimal scheme, a scheme which is as asymptotically good as the one in \cite{ali1,yu}, with $F$ polynomial in $K$? Both these questions are open. Our main contribution in this paper is to provide answers to the above questions. Firstly, we prove that under some mild restriction on user's cache rate, there are no optimal schemes with $F$ smaller than $F^*$. Moreover, we give necessary and sufficient conditions for the existence of optimal schemes in this case. Secondly, we provide an affirmative answer to the second question raised above by an explicit construction and a detailed performance analysis.
2005.10783 | Leighton Barnes | Leighton Pate Barnes, Wei-Ning Chen, and Ayfer Ozgur | Fisher information under local differential privacy | null | null | null | null | cs.IT math.IT math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop data processing inequalities that describe how Fisher information
from statistical samples can scale with the privacy parameter $\varepsilon$
under local differential privacy constraints. These bounds are valid under
general conditions on the distribution of the score of the statistical model,
and they elucidate under which conditions the dependence on $\varepsilon$ is
linear, quadratic, or exponential. We show how these inequalities imply order
optimal lower bounds for private estimation for both the Gaussian location
model and discrete distribution estimation for all levels of privacy
$\varepsilon>0$. We further apply these inequalities to sparse Bernoulli models
and demonstrate privacy mechanisms and estimators with order-matching squared
$\ell^2$ error.
| [
{
"created": "Thu, 21 May 2020 17:05:09 GMT",
"version": "v1"
}
] | 2020-05-22 | [
[
"Barnes",
"Leighton Pate",
""
],
[
"Chen",
"Wei-Ning",
""
],
[
"Ozgur",
"Ayfer",
""
]
] | We develop data processing inequalities that describe how Fisher information from statistical samples can scale with the privacy parameter $\varepsilon$ under local differential privacy constraints. These bounds are valid under general conditions on the distribution of the score of the statistical model, and they elucidate under which conditions the dependence on $\varepsilon$ is linear, quadratic, or exponential. We show how these inequalities imply order optimal lower bounds for private estimation for both the Gaussian location model and discrete distribution estimation for all levels of privacy $\varepsilon>0$. We further apply these inequalities to sparse Bernoulli models and demonstrate privacy mechanisms and estimators with order-matching squared $\ell^2$ error. |
cs/0407031 | Pavel Naumov | Pavel Naumov | On Modal Logics of Partial Recursive Functions | null | null | null | null | cs.LO | null | The classical propositional logic is known to be sound and complete with
respect to the set semantics that interprets connectives as set operations. The
paper extends propositional language by a new binary modality that corresponds
to partial recursive function type constructor under the above interpretation.
The cases of deterministic and non-deterministic functions are considered and
for both of them semantically complete modal logics are described and
decidability of these logics is established.
| [
{
"created": "Mon, 12 Jul 2004 22:53:33 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Naumov",
"Pavel",
""
]
] | The classical propositional logic is known to be sound and complete with respect to the set semantics that interprets connectives as set operations. The paper extends propositional language by a new binary modality that corresponds to partial recursive function type constructor under the above interpretation. The cases of deterministic and non-deterministic functions are considered and for both of them semantically complete modal logics are described and decidability of these logics is established. |
2105.02501 | Fan Bai | Fan Bai, Jiaxiang Wu, Pengcheng Shen, Shaoxin Li and Shuigeng Zhou | Federated Face Recognition | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition has been extensively studied in computer vision and
artificial intelligence communities in recent years. An important issue of face
recognition is data privacy, which is receiving more and more public concern. As a
common privacy-preserving technique, Federated Learning is proposed to train a
model cooperatively without sharing data between parties. However, as far as we
know, it has not been successfully applied in face recognition. This paper
proposes a framework named FedFace to innovate federated learning for face
recognition. Specifically, FedFace relies on two major innovative algorithms,
Partially Federated Momentum (PFM) and Federated Validation (FV). PFM locally
applies an estimated equivalent global momentum to approximate the
centralized momentum-SGD efficiently. FV repeatedly searches for better
federated aggregating weightings via testing the aggregated models on some
private validation datasets, which can improve the model's generalization
ability. The ablation study and extensive experiments validate the
effectiveness of the FedFace method and show that it is comparable to or even
better than the centralized baseline in performance.
| [
{
"created": "Thu, 6 May 2021 08:07:25 GMT",
"version": "v1"
}
] | 2021-05-07 | [
[
"Bai",
"Fan",
""
],
[
"Wu",
"Jiaxiang",
""
],
[
"Shen",
"Pengcheng",
""
],
[
"Li",
"Shaoxin",
""
],
[
"Zhou",
"Shuigeng",
""
]
] | Face recognition has been extensively studied in computer vision and artificial intelligence communities in recent years. An important issue of face recognition is data privacy, which is receiving more and more public concern. As a common privacy-preserving technique, Federated Learning is proposed to train a model cooperatively without sharing data between parties. However, as far as we know, it has not been successfully applied in face recognition. This paper proposes a framework named FedFace to innovate federated learning for face recognition. Specifically, FedFace relies on two major innovative algorithms, Partially Federated Momentum (PFM) and Federated Validation (FV). PFM locally applies an estimated equivalent global momentum to approximate the centralized momentum-SGD efficiently. FV repeatedly searches for better federated aggregating weightings via testing the aggregated models on some private validation datasets, which can improve the model's generalization ability. The ablation study and extensive experiments validate the effectiveness of the FedFace method and show that it is comparable to or even better than the centralized baseline in performance.
1904.11137 | Imil Hamda Imran | Imil Hamda Imran, Zhiyong Chen, Lijun Zhu, and Minyue Fu | A Distributed Adaptive Scheme for Multi-Agent Systems | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In traditional adaptive control, the certainty equivalence principle suggests
a two-step design scheme. A controller is first designed for the ideal
situation assuming the uncertain parameter was known and it renders a Lyapunov
function. Then, the uncertain parameter in the controller is replaced by its
estimation that is updated by an adaptive law along the gradient of Lyapunov
function. This principle does not generally work for a multi-agent system as an
adaptive law based on the gradient of (centrally constructed) Lyapunov function
cannot be implemented in a distributed fashion, except for limited situations.
In this paper, we propose a novel distributed adaptive scheme, not relying on
gradient of Lyapunov function, for general multi-agent systems. In this scheme,
asymptotic consensus of a second-order uncertain multi-agent system is achieved
over a directed communication graph.
| [
{
"created": "Thu, 25 Apr 2019 03:18:59 GMT",
"version": "v1"
}
] | 2019-04-26 | [
[
"Imran",
"Imil Hamda",
""
],
[
"Chen",
"Zhiyong",
""
],
[
"Zhu",
"Lijun",
""
],
[
"Fu",
"Minyue",
""
]
] | In traditional adaptive control, the certainty equivalence principle suggests a two-step design scheme. A controller is first designed for the ideal situation assuming the uncertain parameter was known and it renders a Lyapunov function. Then, the uncertain parameter in the controller is replaced by its estimation that is updated by an adaptive law along the gradient of Lyapunov function. This principle does not generally work for a multi-agent system as an adaptive law based on the gradient of (centrally constructed) Lyapunov function cannot be implemented in a distributed fashion, except for limited situations. In this paper, we propose a novel distributed adaptive scheme, not relying on gradient of Lyapunov function, for general multi-agent systems. In this scheme, asymptotic consensus of a second-order uncertain multi-agent system is achieved over a directed communication graph.
2210.06110 | Moritz Einfalt | Moritz Einfalt, Katja Ludwig, Rainer Lienhart | Uplift and Upsample: Efficient 3D Human Pose Estimation with Uplifting
Transformers | Accepted at IEEE/CVF WACV 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The state-of-the-art for monocular 3D human pose estimation in videos is
dominated by the paradigm of 2D-to-3D pose uplifting. While the uplifting
methods themselves are rather efficient, the true computational complexity
depends on the per-frame 2D pose estimation. In this paper, we present a
Transformer-based pose uplifting scheme that can operate on temporally sparse
2D pose sequences but still produce temporally dense 3D pose estimates. We show
how masked token modeling can be utilized for temporal upsampling within
Transformer blocks. This makes it possible to decouple the sampling rate of
input 2D poses from the target frame rate of the video and drastically decreases the total
computational complexity. Additionally, we explore the option of pre-training
on large motion capture archives, which has been largely neglected so far. We
evaluate our method on two popular benchmark datasets: Human3.6M and
MPI-INF-3DHP. With an MPJPE of 45.0 mm and 46.9 mm, respectively, our proposed
method can compete with the state-of-the-art while reducing inference time by a
factor of 12. This enables real-time throughput with variable consumer hardware
in stationary and mobile applications. We release our code and models at
https://github.com/goldbricklemon/uplift-upsample-3dhpe
| [
{
"created": "Wed, 12 Oct 2022 12:00:56 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Oct 2022 09:23:56 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Oct 2022 10:09:40 GMT",
"version": "v3"
}
] | 2022-10-24 | [
[
"Einfalt",
"Moritz",
""
],
[
"Ludwig",
"Katja",
""
],
[
"Lienhart",
"Rainer",
""
]
] | The state-of-the-art for monocular 3D human pose estimation in videos is dominated by the paradigm of 2D-to-3D pose uplifting. While the uplifting methods themselves are rather efficient, the true computational complexity depends on the per-frame 2D pose estimation. In this paper, we present a Transformer-based pose uplifting scheme that can operate on temporally sparse 2D pose sequences but still produce temporally dense 3D pose estimates. We show how masked token modeling can be utilized for temporal upsampling within Transformer blocks. This makes it possible to decouple the sampling rate of input 2D poses from the target frame rate of the video and drastically decreases the total computational complexity. Additionally, we explore the option of pre-training on large motion capture archives, which has been largely neglected so far. We evaluate our method on two popular benchmark datasets: Human3.6M and MPI-INF-3DHP. With an MPJPE of 45.0 mm and 46.9 mm, respectively, our proposed method can compete with the state-of-the-art while reducing inference time by a factor of 12. This enables real-time throughput with variable consumer hardware in stationary and mobile applications. We release our code and models at https://github.com/goldbricklemon/uplift-upsample-3dhpe
2302.07324 | Chenglei Si | Chenglei Si, Zhengyan Zhang, Yingfa Chen, Xiaozhi Wang, Zhiyuan Liu,
Maosong Sun | READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input
Noises | ACL 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | For many real-world applications, the user-generated inputs usually contain
various noises due to speech recognition errors caused by linguistic
variations or typographical errors (typos). Thus, it is crucial to test model
performance on data with realistic input noises to ensure robustness and
fairness. However, little study has been done to construct such benchmarks for
Chinese, where various language-specific input noises happen in the real world.
In order to fill this important gap, we construct READIN: a Chinese multi-task
benchmark with REalistic And Diverse Input Noises. READIN contains four diverse
tasks and requests annotators to re-enter the original test data with two
commonly used Chinese input methods: Pinyin input and speech input. We designed
our annotation pipeline to maximize diversity, for example by instructing the
annotators to use diverse input method editors (IMEs) for keyboard noises and
recruiting speakers from diverse dialectical groups for speech noises. We
experiment with a series of strong pretrained language models as well as robust
training methods, and we find that these models often suffer significant
performance drops on READIN even with robustness methods like data
augmentation. As the first large-scale attempt in creating a benchmark with
noises geared towards user-generated inputs, we believe that READIN serves as
an important complement to existing Chinese NLP benchmarks. The source code and
dataset can be obtained from https://github.com/thunlp/READIN.
| [
{
"created": "Tue, 14 Feb 2023 20:14:39 GMT",
"version": "v1"
},
{
"created": "Thu, 25 May 2023 01:04:08 GMT",
"version": "v2"
}
] | 2023-05-26 | [
[
"Si",
"Chenglei",
""
],
[
"Zhang",
"Zhengyan",
""
],
[
"Chen",
"Yingfa",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
]
] | For many real-world applications, the user-generated inputs usually contain various noises due to speech recognition errors caused by linguistic variations or typographical errors (typos). Thus, it is crucial to test model performance on data with realistic input noises to ensure robustness and fairness. However, little study has been done to construct such benchmarks for Chinese, where various language-specific input noises happen in the real world. In order to fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises. READIN contains four diverse tasks and requests annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input. We designed our annotation pipeline to maximize diversity, for example by instructing the annotators to use diverse input method editors (IMEs) for keyboard noises and recruiting speakers from diverse dialectical groups for speech noises. We experiment with a series of strong pretrained language models as well as robust training methods, and we find that these models often suffer significant performance drops on READIN even with robustness methods like data augmentation. As the first large-scale attempt in creating a benchmark with noises geared towards user-generated inputs, we believe that READIN serves as an important complement to existing Chinese NLP benchmarks. The source code and dataset can be obtained from https://github.com/thunlp/READIN. |
1912.05047 | Yi Ren | Namwoo Kang, Yi Ren, Fred Feinberg, and Panos Papalambros | Form + Function: Optimizing Aesthetic Product Design via Adaptive,
Geometrized Preference Elicitation | submitted to Marketing Science | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual design is critical to product success, and the subject of intensive
marketing research effort. Yet visual elements, due to their holistic and
interactive nature, do not lend themselves well to optimization using extant
decompositional methods for preference elicitation. Here we present a
systematic methodology to incorporate interactive, 3D-rendered product
configurations into a conjoint-like framework. The method relies on rapid,
scalable machine learning algorithms to adaptively update product designs along
with standard information-oriented product attributes. At its heart is a
parametric account of a product's geometry, along with a novel, adaptive
"bi-level" query task that can estimate individuals' visual design form
preferences and their trade-offs against such traditional elements as price and
product features. We illustrate the method's performance through extensive
simulations and robustness checks, a formal proof of the bi-level query
methodology's domain of superiority, and a field test for the design of a
mid-priced sedan, using real-time 3D rendering for an online panel. Results
indicate not only substantially enhanced predictive accuracy, but two
quantities beyond the reach of standard conjoint methods: trade-offs between
form and function overall, and willingness-to-pay for specific design elements.
Moreover -- and most critically for applications -- the method provides
"optimal" visual designs for both individuals and model-derived or
analyst-supplied consumer groupings, as well as their sensitivities to form and
functional elements.
| [
{
"created": "Tue, 10 Dec 2019 23:36:49 GMT",
"version": "v1"
}
] | 2019-12-12 | [
[
"Kang",
"Namwoo",
""
],
[
"Ren",
"Yi",
""
],
[
"Feinberg",
"Fred",
""
],
[
"Papalambros",
"Panos",
""
]
] | Visual design is critical to product success, and the subject of intensive marketing research effort. Yet visual elements, due to their holistic and interactive nature, do not lend themselves well to optimization using extant decompositional methods for preference elicitation. Here we present a systematic methodology to incorporate interactive, 3D-rendered product configurations into a conjoint-like framework. The method relies on rapid, scalable machine learning algorithms to adaptively update product designs along with standard information-oriented product attributes. At its heart is a parametric account of a product's geometry, along with a novel, adaptive "bi-level" query task that can estimate individuals' visual design form preferences and their trade-offs against such traditional elements as price and product features. We illustrate the method's performance through extensive simulations and robustness checks, a formal proof of the bi-level query methodology's domain of superiority, and a field test for the design of a mid-priced sedan, using real-time 3D rendering for an online panel. Results indicate not only substantially enhanced predictive accuracy, but two quantities beyond the reach of standard conjoint methods: trade-offs between form and function overall, and willingness-to-pay for specific design elements. Moreover -- and most critically for applications -- the method provides "optimal" visual designs for both individuals and model-derived or analyst-supplied consumer groupings, as well as their sensitivities to form and functional elements. |
2305.18183 | Gowtham Reddy Abbavaram | Abbavaram Gowtham Reddy, Saketh Bachu, Saloni Dash, Charchit Sharma,
Amit Sharma, Vineeth N Balasubramanian | On Counterfactual Data Augmentation Under Confounding | null | null | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by/4.0/ | Counterfactual data augmentation has recently emerged as a method to mitigate
confounding biases in the training data. These biases, such as spurious
correlations, arise due to various observed and unobserved confounding
variables in the data generation process. In this paper, we formally analyze
how confounding biases impact downstream classifiers and present a causal
viewpoint to the solutions based on counterfactual data augmentation. We
explore how removing confounding biases serves as a means to learn invariant
features, ultimately aiding in generalization beyond the observed data
distribution. Additionally, we present a straightforward yet powerful algorithm
for generating counterfactual images, which effectively mitigates the influence
of confounding effects on downstream classifiers. Through experiments on MNIST
variants and the CelebA datasets, we demonstrate how our simple augmentation
method helps existing state-of-the-art methods achieve good results.
| [
{
"created": "Mon, 29 May 2023 16:20:23 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Nov 2023 09:11:38 GMT",
"version": "v2"
}
] | 2023-11-22 | [
[
"Reddy",
"Abbavaram Gowtham",
""
],
[
"Bachu",
"Saketh",
""
],
[
"Dash",
"Saloni",
""
],
[
"Sharma",
"Charchit",
""
],
[
"Sharma",
"Amit",
""
],
[
"Balasubramanian",
"Vineeth N",
""
]
] | Counterfactual data augmentation has recently emerged as a method to mitigate confounding biases in the training data. These biases, such as spurious correlations, arise due to various observed and unobserved confounding variables in the data generation process. In this paper, we formally analyze how confounding biases impact downstream classifiers and present a causal viewpoint to the solutions based on counterfactual data augmentation. We explore how removing confounding biases serves as a means to learn invariant features, ultimately aiding in generalization beyond the observed data distribution. Additionally, we present a straightforward yet powerful algorithm for generating counterfactual images, which effectively mitigates the influence of confounding effects on downstream classifiers. Through experiments on MNIST variants and the CelebA datasets, we demonstrate how our simple augmentation method helps existing state-of-the-art methods achieve good results. |
2301.01676 | Prerona Chatterjee | Prerona Chatterjee, Pavel Hrube\v{s} | New Lower Bounds against Homogeneous Non-Commutative Circuits | null | null | null | null | cs.CC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We give several new lower bounds on the size of homogeneous non-commutative
circuits. We present an explicit homogeneous bivariate polynomial of degree $d$
which requires a homogeneous non-commutative circuit of size $\Omega(d/\log d)$.
For an $n$-variate polynomial with $n>1$, the result can be improved to
$\Omega(nd)$, if $d\leq n$, or $\Omega(nd \frac{\log n}{\log d})$, if $d\geq
n$.
Under the same assumptions, we also give a quadratic lower bound for the
ordered version of the central symmetric polynomial.
| [
{
"created": "Wed, 4 Jan 2023 16:00:56 GMT",
"version": "v1"
}
] | 2023-01-05 | [
[
"Chatterjee",
"Prerona",
""
],
[
"Hrubeš",
"Pavel",
""
]
] | We give several new lower bounds on the size of homogeneous non-commutative circuits. We present an explicit homogeneous bivariate polynomial of degree $d$ which requires a homogeneous non-commutative circuit of size $\Omega(d/\log d)$. For an $n$-variate polynomial with $n>1$, the result can be improved to $\Omega(nd)$, if $d\leq n$, or $\Omega(nd \frac{\log n}{\log d})$, if $d\geq n$. Under the same assumptions, we also give a quadratic lower bound for the ordered version of the central symmetric polynomial. |
2404.11401 | Xianqiang Lyu | Xianqiang Lyu and Hui Liu and Junhui Hou | RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled
Neural Rendering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose RainyScape, an unsupervised framework for reconstructing clean
scenes from a collection of multi-view rainy images. RainyScape consists of two
main modules: a neural rendering module and a rain-prediction module that
incorporates a predictor network and a learnable latent embedding that captures
the rain characteristics of the scene. Specifically, based on the spectral bias
property of neural networks, we first optimize the neural rendering pipeline to
obtain a low-frequency scene representation. Subsequently, we jointly optimize
the two modules, driven by the proposed adaptive direction-sensitive
gradient-based reconstruction loss, which encourages the network to distinguish
between scene details and rain streaks, facilitating the propagation of
gradients to the relevant components. Extensive experiments on both the classic
neural radiance field and the recently proposed 3D Gaussian splatting
demonstrate the superiority of our method in effectively eliminating rain
streaks and rendering clean images, achieving state-of-the-art performance. The
constructed high-quality dataset and source code will be publicly available.
| [
{
"created": "Wed, 17 Apr 2024 14:07:22 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Lyu",
"Xianqiang",
""
],
[
"Liu",
"Hui",
""
],
[
"Hou",
"Junhui",
""
]
] | We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images. RainyScape consists of two main modules: a neural rendering module and a rain-prediction module that incorporates a predictor network and a learnable latent embedding that captures the rain characteristics of the scene. Specifically, based on the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation. Subsequently, we jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss, which encourages the network to distinguish between scene details and rain streaks, facilitating the propagation of gradients to the relevant components. Extensive experiments on both the classic neural radiance field and the recently proposed 3D Gaussian splatting demonstrate the superiority of our method in effectively eliminating rain streaks and rendering clean images, achieving state-of-the-art performance. The constructed high-quality dataset and source code will be publicly available. |
1710.06323 | Anna-Lena Horlemann-Trautmann | Heide Gluesing-Luerssen and Anna-Lena Horlemann-Trautmann | Symbol Erasure Correction Capability of Spread Codes | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider data transmission over a network where each edge is an erasure
channel and where the inner nodes transmit a random linear combination of their
incoming information. We distinguish two channel models in this setting, the
row and the column erasure channel model. For both models we derive the symbol
erasure correction capabilities of spread codes and compare them to other known
codes suitable for those models. Furthermore, we explain how to decode these
codes in the two channel models and compare their decoding complexities. The
results show that, depending on the application and the to-be-optimized aspect,
any combination of codes and channel models can be the best choice.
| [
{
"created": "Tue, 17 Oct 2017 14:52:06 GMT",
"version": "v1"
}
] | 2017-10-18 | [
[
"Gluesing-Luerssen",
"Heide",
""
],
[
"Horlemann-Trautmann",
"Anna-Lena",
""
]
] | We consider data transmission over a network where each edge is an erasure channel and where the inner nodes transmit a random linear combination of their incoming information. We distinguish two channel models in this setting, the row and the column erasure channel model. For both models we derive the symbol erasure correction capabilities of spread codes and compare them to other known codes suitable for those models. Furthermore, we explain how to decode these codes in the two channel models and compare their decoding complexities. The results show that, depending on the application and the to-be-optimized aspect, any combination of codes and channel models can be the best choice. |
2305.15091 | Wadii Boulila Prof. | Zouhayra Ayadi, Wadii Boulila, Imed Riadh Farah | Modeling Complex Object Changes in Satellite Image Time-Series: Approach
based on CSP and Spatiotemporal Graph | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a method for automatically monitoring and analyzing the
evolution of complex geographic objects. The objects are modeled as a
spatiotemporal graph, which separates filiation relations, spatial relations,
and spatiotemporal relations, and is analyzed by detecting frequent sub-graphs
using constraint satisfaction problems (CSP). The process is divided into four
steps: first, the identification of complex objects in each satellite image;
second, the construction of a spatiotemporal graph to model the spatiotemporal
changes of the complex objects; third, the creation of sub-graphs to be
detected in the base spatiotemporal graph; and fourth, the analysis of the
spatiotemporal graph by detecting the sub-graphs and solving a constraint
network to determine relevant sub-graphs. The final step is further broken down
into two sub-steps: (i) the modeling of the constraint network with defined
variables and constraints, and (ii) the solving of the constraint network to
find relevant sub-graphs in the spatiotemporal graph. Experiments were
conducted using real-world satellite images representing several cities in
Saudi Arabia, and the results demonstrate the effectiveness of the proposed
approach.
| [
{
"created": "Wed, 24 May 2023 12:15:19 GMT",
"version": "v1"
}
] | 2023-05-25 | [
[
"Ayadi",
"Zouhayra",
""
],
[
"Boulila",
"Wadii",
""
],
[
"Farah",
"Imed Riadh",
""
]
] | This paper proposes a method for automatically monitoring and analyzing the evolution of complex geographic objects. The objects are modeled as a spatiotemporal graph, which separates filiation relations, spatial relations, and spatiotemporal relations, and is analyzed by detecting frequent sub-graphs using constraint satisfaction problems (CSP). The process is divided into four steps: first, the identification of complex objects in each satellite image; second, the construction of a spatiotemporal graph to model the spatiotemporal changes of the complex objects; third, the creation of sub-graphs to be detected in the base spatiotemporal graph; and fourth, the analysis of the spatiotemporal graph by detecting the sub-graphs and solving a constraint network to determine relevant sub-graphs. The final step is further broken down into two sub-steps: (i) the modeling of the constraint network with defined variables and constraints, and (ii) the solving of the constraint network to find relevant sub-graphs in the spatiotemporal graph. Experiments were conducted using real-world satellite images representing several cities in Saudi Arabia, and the results demonstrate the effectiveness of the proposed approach. |
2401.17835 | Tankred Saanum | Tankred Saanum, Peter Dayan, Eric Schulz | Predicting the Future with Simple World Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | World models can represent potentially high-dimensional pixel observations in
compact latent spaces, making it tractable to model the dynamics of the
environment. However, the latent dynamics inferred by these models may still be
highly complex. Abstracting the dynamics of the environment with simple models
can have several benefits. If the latent dynamics are simple, the model may
generalize better to novel transitions, and discover useful latent
representations of environment states. We propose a regularization scheme that
simplifies the world model's latent dynamics. Our model, the Parsimonious
Latent Space Model (PLSM), minimizes the mutual information between latent
states and the dynamics that arise between them. This makes the dynamics softly
state-invariant, and the effects of the agent's actions more predictable. We
combine the PLSM with three different model classes used for i) future latent
state prediction, ii) video prediction, and iii) planning. We find that our
regularization improves accuracy, generalization, and performance in downstream
tasks.
| [
{
"created": "Wed, 31 Jan 2024 13:52:11 GMT",
"version": "v1"
}
] | 2024-02-01 | [
[
"Saanum",
"Tankred",
""
],
[
"Dayan",
"Peter",
""
],
[
"Schulz",
"Eric",
""
]
] | World models can represent potentially high-dimensional pixel observations in compact latent spaces, making it tractable to model the dynamics of the environment. However, the latent dynamics inferred by these models may still be highly complex. Abstracting the dynamics of the environment with simple models can have several benefits. If the latent dynamics are simple, the model may generalize better to novel transitions, and discover useful latent representations of environment states. We propose a regularization scheme that simplifies the world model's latent dynamics. Our model, the Parsimonious Latent Space Model (PLSM), minimizes the mutual information between latent states and the dynamics that arise between them. This makes the dynamics softly state-invariant, and the effects of the agent's actions more predictable. We combine the PLSM with three different model classes used for i) future latent state prediction, ii) video prediction, and iii) planning. We find that our regularization improves accuracy, generalization, and performance in downstream tasks. |
2007.06677 | Elizabeth Polgreen | Nicolas Chan, Elizabeth Polgreen and Sanjit A. Seshia | Gradient Descent over Metagrammars for Syntax-Guided Synthesis | 5 pages, SYNT 2020 | null | null | null | cs.SE cs.AI cs.LG cs.PL | http://creativecommons.org/licenses/by/4.0/ | The performance of a syntax-guided synthesis algorithm is highly dependent on
the provision of a good syntactic template, or grammar. Provision of such a
template is often left to the user to do manually, though in the absence of
such a grammar, state-of-the-art solvers will provide their own default
grammar, which is dependent on the signature of the target program to be
synthesized. In this work, we speculate this default grammar could be improved
upon substantially. We build sets of rules, or metagrammars, for constructing
grammars, and perform a gradient descent over these metagrammars aiming to find
a metagrammar that solves more benchmarks, and solves them faster on average. We show the
resulting metagrammar enables CVC4 to solve 26% more benchmarks than the
default grammar within a 300s time-out, and that metagrammars learnt from tens
of benchmarks generalize to performance on 100s of benchmarks.
| [
{
"created": "Mon, 13 Jul 2020 20:37:35 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jul 2020 18:31:10 GMT",
"version": "v2"
}
] | 2020-07-20 | [
[
"Chan",
"Nicolas",
""
],
[
"Polgreen",
"Elizabeth",
""
],
[
"Seshia",
"Sanjit A.",
""
]
] | The performance of a syntax-guided synthesis algorithm is highly dependent on the provision of a good syntactic template, or grammar. Provision of such a template is often left to the user to do manually, though in the absence of such a grammar, state-of-the-art solvers will provide their own default grammar, which is dependent on the signature of the target program to be synthesized. In this work, we speculate this default grammar could be improved upon substantially. We build sets of rules, or metagrammars, for constructing grammars, and perform a gradient descent over these metagrammars aiming to find a metagrammar that solves more benchmarks, and solves them faster on average. We show the resulting metagrammar enables CVC4 to solve 26% more benchmarks than the default grammar within a 300s time-out, and that metagrammars learnt from tens of benchmarks generalize to performance on 100s of benchmarks. |
1607.02537 | Heng Fan | Heng Fan, Xue Mei, Danil Prokhorov and Haibin Ling | Multi-level Contextual RNNs with Attention Model for Scene Labeling | 8 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Context in an image is crucial for scene labeling, but existing methods only
exploit local context generated from a small surrounding area of an image patch
or a pixel; by contrast, long-range and global contextual information is
ignored. To handle this issue, in this work we propose a novel approach for
scene labeling by exploring multi-level contextual recurrent neural networks
(ML-CRNNs). Specifically, we encode three kinds of contextual cues, i.e., local
context, global context and image topic context in structural recurrent neural
networks (RNNs) to model long-range local and global dependencies in image. In
this way, our method is able to `see' the image in terms of both long-range
local and holistic views, and make a more reliable inference for image
labeling. Besides, we integrate the proposed contextual RNNs into hierarchical
convolutional neural networks (CNNs), and exploit dependence relationships in
multiple levels to provide rich spatial and semantic information. Moreover, we
novelly adopt an attention model to effectively merge multiple levels and show
that it outperforms average- or max-pooling fusion strategies. Extensive
experiments demonstrate that the proposed approach achieves new
state-of-the-art results on the CamVid, SiftFlow and Stanford-background
datasets.
| [
{
"created": "Fri, 8 Jul 2016 21:51:53 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Aug 2016 21:15:51 GMT",
"version": "v2"
}
] | 2016-08-12 | [
[
"Fan",
"Heng",
""
],
[
"Mei",
"Xue",
""
],
[
"Prokhorov",
"Danil",
""
],
[
"Ling",
"Haibin",
""
]
] | Context in an image is crucial for scene labeling, but existing methods only exploit local context generated from a small surrounding area of an image patch or a pixel; by contrast, long-range and global contextual information is ignored. To handle this issue, in this work we propose a novel approach for scene labeling by exploring multi-level contextual recurrent neural networks (ML-CRNNs). Specifically, we encode three kinds of contextual cues, i.e., local context, global context and image topic context in structural recurrent neural networks (RNNs) to model long-range local and global dependencies in image. In this way, our method is able to `see' the image in terms of both long-range local and holistic views, and make a more reliable inference for image labeling. Besides, we integrate the proposed contextual RNNs into hierarchical convolutional neural networks (CNNs), and exploit dependence relationships in multiple levels to provide rich spatial and semantic information. Moreover, we novelly adopt an attention model to effectively merge multiple levels and show that it outperforms average- or max-pooling fusion strategies. Extensive experiments demonstrate that the proposed approach achieves new state-of-the-art results on the CamVid, SiftFlow and Stanford-background datasets. |
2406.18383 | Santiago Figueira | Ver\'onica Becher, Olivier Carton and Santiago Figueira | Rauzy dimension and finite-state dimension | null | null | null | null | cs.IT cs.FL math.IT | http://creativecommons.org/licenses/by/4.0/ | In a paper of 1976, Rauzy studied two complexity notions, $\underline{\beta}$
and $\overline{\beta}$, for infinite sequences over a finite alphabet. The
function $\underline{\beta}$ is maximum exactly in the Borel normal sequences
and $\overline{\beta}$ is minimum exactly in the sequences that, when added to
any Borel normal sequence, the result is also Borel normal. Although the
definitions of $\underline{\beta}$ and $\overline{\beta}$ do not involve
finite-state automata, we establish some connections between them and the lower
$\underline{\rm dim}$ and upper $\overline{\rm dim}$ finite-state dimension (or
other equivalent notions like finite-state compression ratio, aligned-entropy
or cumulative log-loss of finite-state predictors). We show tight lower and
upper bounds on $\underline{\rm dim}$ and $\overline{\rm dim}$ as functions of
$\underline{\beta}$ and $\overline{\beta}$, respectively. In particular this
implies that sequences with $\overline{\rm dim}$ zero are exactly the ones
that, when added to any Borel normal sequence, the result is also Borel normal.
We also show that the finite-state dimensions $\underline{\rm dim}$ and
$\overline{\rm dim}$ are essentially subadditive. We need two technical tools
that are of independent interest. One is the family of local finite-state
automata, which are automata whose memory consists of the last $k$ read symbols
for some fixed integer $k$. We show that compressors based on local
finite-state automata are as good as standard finite-state compressors. The
other one is a notion of finite-state relational (non-deterministic)
compressor, which can compress an input in several ways provided the input can
always be recovered from any of its outputs. We show that such compressors
cannot compress more than standard (deterministic) finite-state compressors.
| [
{
"created": "Wed, 26 Jun 2024 14:24:58 GMT",
"version": "v1"
}
] | 2024-06-27 | [
[
"Becher",
"Verónica",
""
],
[
"Carton",
"Olivier",
""
],
[
"Figueira",
"Santiago",
""
]
] | In a paper of 1976, Rauzy studied two complexity notions, $\underline{\beta}$ and $\overline{\beta}$, for infinite sequences over a finite alphabet. The function $\underline{\beta}$ is maximum exactly in the Borel normal sequences and $\overline{\beta}$ is minimum exactly in the sequences that, when added to any Borel normal sequence, the result is also Borel normal. Although the definitions of $\underline{\beta}$ and $\overline{\beta}$ do not involve finite-state automata, we establish some connections between them and the lower $\underline{\rm dim}$ and upper $\overline{\rm dim}$ finite-state dimension (or other equivalent notions like finite-state compression ratio, aligned-entropy or cumulative log-loss of finite-state predictors). We show tight lower and upper bounds on $\underline{\rm dim}$ and $\overline{\rm dim}$ as functions of $\underline{\beta}$ and $\overline{\beta}$, respectively. In particular this implies that sequences with $\overline{\rm dim}$ zero are exactly the ones that, when added to any Borel normal sequence, the result is also Borel normal. We also show that the finite-state dimensions $\underline{\rm dim}$ and $\overline{\rm dim}$ are essentially subadditive. We need two technical tools that are of independent interest. One is the family of local finite-state automata, which are automata whose memory consists of the last $k$ read symbols for some fixed integer $k$. We show that compressors based on local finite-state automata are as good as standard finite-state compressors. The other one is a notion of finite-state relational (non-deterministic) compressor, which can compress an input in several ways provided the input can always be recovered from any of its outputs. We show that such compressors cannot compress more than standard (deterministic) finite-state compressors. |
2106.11426 | Zichang Liu | Zichang Liu, Benjamin Coleman, Anshumali Shrivastava | Efficient Inference via Universal LSH Kernel | null | null | null | null | cs.LG cs.AI cs.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large machine learning models achieve unprecedented performance on various
tasks and have evolved as the go-to technique. However, deploying these
compute- and memory-hungry models in resource-constrained environments poses new
challenges. In this work, we propose mathematically provable Representer
Sketch, a concise set of count arrays that can approximate the inference
procedure with simple hashing computations and aggregations. Representer Sketch
builds upon the popular Representer Theorem from kernel literature, hence the
name, providing a generic fundamental alternative to the problem of efficient
inference that goes beyond the popular approach such as quantization, iterative
pruning and knowledge distillation. A neural network function is transformed to
its weighted kernel density representation, which can be very efficiently
estimated with our sketching algorithm. Empirically, we show that Representer
Sketch achieves up to 114x reduction in storage requirement and 59x reduction
in computation complexity without any drop in accuracy.
| [
{
"created": "Mon, 21 Jun 2021 22:06:32 GMT",
"version": "v1"
}
] | 2021-06-23 | [
[
"Liu",
"Zichang",
""
],
[
"Coleman",
"Benjamin",
""
],
[
"Shrivastava",
"Anshumali",
""
]
] | Large machine learning models achieve unprecedented performance on various tasks and have evolved as the go-to technique. However, deploying these compute- and memory-hungry models in resource-constrained environments poses new challenges. In this work, we propose mathematically provable Representer Sketch, a concise set of count arrays that can approximate the inference procedure with simple hashing computations and aggregations. Representer Sketch builds upon the popular Representer Theorem from kernel literature, hence the name, providing a generic fundamental alternative to the problem of efficient inference that goes beyond popular approaches such as quantization, iterative pruning and knowledge distillation. A neural network function is transformed to its weighted kernel density representation, which can be very efficiently estimated with our sketching algorithm. Empirically, we show that Representer Sketch achieves up to 114x reduction in storage requirement and 59x reduction in computation complexity without any drop in accuracy. |
1201.0564 | Toby Walsh | Ronald de Haan, Nina Narodytska, Toby Walsh | The RegularGcc Matrix Constraint | Submitted to CPAIOR 2012 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study propagation of the RegularGcc global constraint. This ensures that
each row of a matrix of decision variables satisfies a Regular constraint, and
each column satisfies a Gcc constraint. On the negative side, we prove that
propagation is NP-hard even under some strong restrictions (e.g. just 3 values,
just 4 states in the automaton, or just 5 columns to the matrix). On the
positive side, we identify two cases where propagation is fixed parameter
tractable. In addition, we show how to improve propagation over a simple
decomposition into separate Regular and Gcc constraints by identifying some
necessary but insufficient conditions for a solution. We enforce these
conditions with some additional weighted row automata. Experimental results
demonstrate the potential of these methods on some standard benchmark problems.
| [
{
"created": "Tue, 3 Jan 2012 03:30:18 GMT",
"version": "v1"
}
] | 2016-11-26 | [
[
"de Haan",
"Ronald",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Walsh",
"Toby",
""
]
] | We study propagation of the RegularGcc global constraint. This ensures that each row of a matrix of decision variables satisfies a Regular constraint, and each column satisfies a Gcc constraint. On the negative side, we prove that propagation is NP-hard even under some strong restrictions (e.g. just 3 values, just 4 states in the automaton, or just 5 columns to the matrix). On the positive side, we identify two cases where propagation is fixed parameter tractable. In addition, we show how to improve propagation over a simple decomposition into separate Regular and Gcc constraints by identifying some necessary but insufficient conditions for a solution. We enforce these conditions with some additional weighted row automata. Experimental results demonstrate the potential of these methods on some standard benchmark problems. |
2106.01048 | Conor F. Hayes | Conor F. Hayes, Timothy Verstraeten, Diederik M. Roijers, Enda Howley,
Patrick Mannion | Expected Scalarised Returns Dominance: A New Solution Concept for
Multi-Objective Decision Making | null | null | 10.1007/s00521-022-07334-x | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In many real-world scenarios, the utility of a user is derived from the
single execution of a policy. In this case, to apply multi-objective
reinforcement learning, the expected utility of the returns must be optimised.
Various scenarios exist where a user's preferences over objectives (also known
as the utility function) are unknown or difficult to specify. In such
scenarios, a set of optimal policies must be learned. However, settings where
the expected utility must be maximised have been largely overlooked by the
multi-objective reinforcement learning community and, as a consequence, a set
of optimal solutions has yet to be defined. In this paper we address this
challenge by proposing first-order stochastic dominance as a criterion to build
solution sets to maximise expected utility. We also propose a new dominance
criterion, known as expected scalarised returns (ESR) dominance, that extends
first-order stochastic dominance to allow a set of optimal policies to be
learned in practice. We then define a new solution concept called the ESR set,
which is a set of policies that are ESR dominant. Finally, we define a new
multi-objective distributional tabular reinforcement learning (MOT-DRL)
algorithm to learn the ESR set in a multi-objective multi-armed bandit setting.
| [
{
"created": "Wed, 2 Jun 2021 09:42:42 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Sep 2021 16:46:44 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Jul 2022 13:24:54 GMT",
"version": "v3"
}
] | 2022-07-06 | [
[
"Hayes",
"Conor F.",
""
],
[
"Verstraeten",
"Timothy",
""
],
[
"Roijers",
"Diederik M.",
""
],
[
"Howley",
"Enda",
""
],
[
"Mannion",
"Patrick",
""
]
] | In many real-world scenarios, the utility of a user is derived from the single execution of a policy. In this case, to apply multi-objective reinforcement learning, the expected utility of the returns must be optimised. Various scenarios exist where a user's preferences over objectives (also known as the utility function) are unknown or difficult to specify. In such scenarios, a set of optimal policies must be learned. However, settings where the expected utility must be maximised have been largely overlooked by the multi-objective reinforcement learning community and, as a consequence, a set of optimal solutions has yet to be defined. In this paper we address this challenge by proposing first-order stochastic dominance as a criterion to build solution sets to maximise expected utility. We also propose a new dominance criterion, known as expected scalarised returns (ESR) dominance, that extends first-order stochastic dominance to allow a set of optimal policies to be learned in practice. We then define a new solution concept called the ESR set, which is a set of policies that are ESR dominant. Finally, we define a new multi-objective distributional tabular reinforcement learning (MOT-DRL) algorithm to learn the ESR set in a multi-objective multi-armed bandit setting. |
2408.01215 | Juyoung Yun | Juyoung Yun, Hoyoung Kim, Suin Cho, Hangil Kang | ZNorm: Z-Score Gradient Normalization for Accelerating Neural Network
Training | null | null | null | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | The rapid advancements in deep learning necessitate efficient training
methods for deep neural networks (DNNs). As models grow in complexity,
vanishing and exploding gradients impede convergence and performance. We
propose Z-Score Normalization for Gradient Descent (ZNorm), an innovative
technique that adjusts only the gradients to enhance training efficiency and
improve model performance. ZNorm normalizes the overall gradients, providing
consistent gradient scaling across layers, thereby reducing the risks of
vanishing and exploding gradients. Our extensive experiments on CIFAR-10 and
medical datasets demonstrate that ZNorm not only accelerates convergence but
also enhances performance metrics. ZNorm consistently outperforms existing
methods, achieving superior results using the same computational settings. In
medical imaging applications, ZNorm improves tumor prediction and segmentation
performances, underscoring its practical utility. These findings highlight
ZNorm's potential as a robust and versatile tool for improving the efficiency
and effectiveness of deep neural network training across a wide range of
architectures and applications.
| [
{
"created": "Fri, 2 Aug 2024 12:04:19 GMT",
"version": "v1"
}
] | 2024-08-05 | [
[
"Yun",
"Juyoung",
""
],
[
"Kim",
"Hoyoung",
""
],
[
"Cho",
"Suin",
""
],
[
"Kang",
"Hangil",
""
]
] | The rapid advancements in deep learning necessitate efficient training methods for deep neural networks (DNNs). As models grow in complexity, vanishing and exploding gradients impede convergence and performance. We propose Z-Score Normalization for Gradient Descent (ZNorm), an innovative technique that adjusts only the gradients to enhance training efficiency and improve model performance. ZNorm normalizes the overall gradients, providing consistent gradient scaling across layers, thereby reducing the risks of vanishing and exploding gradients. Our extensive experiments on CIFAR-10 and medical datasets demonstrate that ZNorm not only accelerates convergence but also enhances performance metrics. ZNorm consistently outperforms existing methods, achieving superior results using the same computational settings. In medical imaging applications, ZNorm improves tumor prediction and segmentation performances, underscoring its practical utility. These findings highlight ZNorm's potential as a robust and versatile tool for improving the efficiency and effectiveness of deep neural network training across a wide range of architectures and applications. |
1811.02189 | Weijie Kong | Weijie Kong, Nannan Li, Shan Liu, Thomas Li, Ge Li | BLP -- Boundary Likelihood Pinpointing Networks for Accurate Temporal
Action Localization | Accepted to International Conference on Acoustics, Speech, and Signal
Processing (ICASSP), 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite tremendous progress achieved in temporal action detection,
state-of-the-art methods still suffer from the sharp performance deterioration
when localizing the starting and ending temporal action boundaries. Although
most methods apply a boundary regression paradigm to tackle this problem, we
argue that direct regression lacks sufficiently detailed information to yield
accurate temporal boundaries. In this paper, we propose a novel Boundary
Likelihood Pinpointing (BLP) network to alleviate this deficiency of boundary
regression and improve the localization accuracy. Given a loosely localized
search interval that contains an action instance, BLP casts the problem of
localizing temporal boundaries as that of assigning probabilities on each
equally divided unit of this interval. These generated probabilities provide
useful information regarding the boundary location of the action inside this
search interval. Based on these probabilities, we introduce a boundary
pinpointing paradigm to pinpoint the accurate boundaries under a simple
probabilistic framework. Compared with other C3D feature based detectors,
extensive experiments demonstrate that BLP significantly improves the
localization performance of recent state-of-the-art detectors, and achieves
competitive detection mAP on both THUMOS' 14 and ActivityNet datasets,
particularly when the evaluation tIoU is high.
| [
{
"created": "Tue, 6 Nov 2018 06:54:58 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Nov 2018 08:25:27 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Nov 2018 03:33:40 GMT",
"version": "v3"
},
{
"created": "Sat, 9 Feb 2019 14:28:03 GMT",
"version": "v4"
},
{
"created": "Tue, 19 Feb 2019 03:46:11 GMT",
"version": "v5"
},
{
"created": "Mon, 16 Dec 2019 03:09:43 GMT",
"version": "v6"
}
] | 2019-12-17 | [
[
"Kong",
"Weijie",
""
],
[
"Li",
"Nannan",
""
],
[
"Liu",
"Shan",
""
],
[
"Li",
"Thomas",
""
],
[
"Li",
"Ge",
""
]
] | Despite tremendous progress achieved in temporal action detection, state-of-the-art methods still suffer from the sharp performance deterioration when localizing the starting and ending temporal action boundaries. Although most methods apply a boundary regression paradigm to tackle this problem, we argue that direct regression lacks sufficiently detailed information to yield accurate temporal boundaries. In this paper, we propose a novel Boundary Likelihood Pinpointing (BLP) network to alleviate this deficiency of boundary regression and improve the localization accuracy. Given a loosely localized search interval that contains an action instance, BLP casts the problem of localizing temporal boundaries as that of assigning probabilities on each equally divided unit of this interval. These generated probabilities provide useful information regarding the boundary location of the action inside this search interval. Based on these probabilities, we introduce a boundary pinpointing paradigm to pinpoint the accurate boundaries under a simple probabilistic framework. Compared with other C3D feature based detectors, extensive experiments demonstrate that BLP significantly improves the localization performance of recent state-of-the-art detectors, and achieves competitive detection mAP on both THUMOS' 14 and ActivityNet datasets, particularly when the evaluation tIoU is high. |
2108.01064 | Anjum Anjum | Anushka Gupta, Diksha Chugh, Anjum, Rahul Katarya | Automated News Summarization Using Transformers | 10 pages | Sustainable Advanced Computing - Select Proceedings of ICSAC 2021 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The amount of text data available online is increasing at a very fast pace;
hence, text summarization has become essential. Most modern recommender
and text classification systems require going through a huge amount of data.
Manually generating precise and fluent summaries of lengthy articles is a very
tiresome and time-consuming task. Hence generating automated summaries for the
data and using them to train machine learning models will make these models space-
and time-efficient. Extractive summarization and abstractive summarization are
two separate methods of generating summaries. The extractive technique
identifies the relevant sentences from the original document and extracts only
those from the text. Whereas in abstractive summarization techniques, the
summary is generated after interpreting the original text, hence making it more
complicated. In this paper, we present a comprehensive comparison of
a few transformer architecture based pre-trained models for text summarization.
For analysis and comparison, we have used the BBC news dataset that contains
text data that can be used for summarization and human-generated summaries for
evaluating and comparing the summaries generated by machine learning models.
| [
{
"created": "Fri, 23 Apr 2021 04:22:33 GMT",
"version": "v1"
}
] | 2021-08-03 | [
[
"Gupta",
"Anushka",
""
],
[
"Chugh",
"Diksha",
""
],
[
"Anjum",
"",
""
],
[
"Katarya",
"Rahul",
""
]
] | The amount of text data available online is increasing at a very fast pace; hence, text summarization has become essential. Most modern recommender and text classification systems require going through a huge amount of data. Manually generating precise and fluent summaries of lengthy articles is a very tiresome and time-consuming task. Hence generating automated summaries for the data and using them to train machine learning models will make these models space- and time-efficient. Extractive summarization and abstractive summarization are two separate methods of generating summaries. The extractive technique identifies the relevant sentences from the original document and extracts only those from the text. Whereas in abstractive summarization techniques, the summary is generated after interpreting the original text, hence making it more complicated. In this paper, we present a comprehensive comparison of a few transformer architecture based pre-trained models for text summarization. For analysis and comparison, we have used the BBC news dataset that contains text data that can be used for summarization and human-generated summaries for evaluating and comparing the summaries generated by machine learning models. |
2406.19309 | Mathias Vast | Mathias Vast, Basile Van Cooten, Laure Soulier and Benjamin Piwowarski | Which Neurons Matter in IR? Applying Integrated Gradients-based Methods
to Understand Cross-Encoders | Accepted at ICTIR 2024 | null | 10.1145/3664190.3672528 | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | With the recent addition of Retrieval-Augmented Generation (RAG), the scope
and importance of Information Retrieval (IR) have expanded. As a result, the
importance of a deeper understanding of IR models also increases. However,
interpretability in IR remains under-explored, especially when it comes to the
models' inner mechanisms. In this paper, we explore the possibility of adapting
Integrated Gradient-based methods in an IR context to identify the role of
individual neurons within the model. In particular, we provide new insights
into the role of what we call "relevance" neurons, as well as how they deal
with unseen data. Finally, we carry out an in-depth pruning study to validate
our findings.
| [
{
"created": "Thu, 27 Jun 2024 16:33:40 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2024 15:48:55 GMT",
"version": "v2"
}
] | 2024-07-08 | [
[
"Vast",
"Mathias",
""
],
[
"Van Cooten",
"Basile",
""
],
[
"Soulier",
"Laure",
""
],
[
"Piwowarski",
"Benjamin",
""
]
] | With the recent addition of Retrieval-Augmented Generation (RAG), the scope and importance of Information Retrieval (IR) have expanded. As a result, the importance of a deeper understanding of IR models also increases. However, interpretability in IR remains under-explored, especially when it comes to the models' inner mechanisms. In this paper, we explore the possibility of adapting Integrated Gradient-based methods in an IR context to identify the role of individual neurons within the model. In particular, we provide new insights into the role of what we call "relevance" neurons, as well as how they deal with unseen data. Finally, we carry out an in-depth pruning study to validate our findings. |
2405.05508 | Mingzhu Wang | Mingzhu Wang, Yuzhe Zhang, Qihang Zhao, Juanyi Yang, Hong Zhang | Redefining Information Retrieval of Structured Database via Large
Language Models | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval augmentation is critical when Language Models (LMs) exploit
non-parametric knowledge related to the query through external knowledge bases
before reasoning. The retrieved information is incorporated into LMs as context
alongside the query, enhancing the reliability of responses towards factual
questions. Prior research in retrieval augmentation typically follows a
retriever-generator paradigm. In this context, traditional retrievers encounter
challenges in precisely and seamlessly extracting query-relevant information
from knowledge bases. To address this issue, this paper introduces a novel
retrieval augmentation framework called ChatLR that primarily employs the
powerful semantic understanding ability of Large Language Models (LLMs) as
retrievers to achieve precise and concise information retrieval. Additionally,
we construct an LLM-based search and question answering system tailored for the
financial domain by fine-tuning LLM on two tasks including Text2API and API-ID
recognition. Experimental results demonstrate the effectiveness of ChatLR in
addressing user queries, achieving an overall information retrieval accuracy
exceeding 98.8\%.
| [
{
"created": "Thu, 9 May 2024 02:37:53 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Wang",
"Mingzhu",
""
],
[
"Zhang",
"Yuzhe",
""
],
[
"Zhao",
"Qihang",
""
],
[
"Yang",
"Juanyi",
""
],
[
"Zhang",
"Hong",
""
]
] | Retrieval augmentation is critical when Language Models (LMs) exploit non-parametric knowledge related to the query through external knowledge bases before reasoning. The retrieved information is incorporated into LMs as context alongside the query, enhancing the reliability of responses towards factual questions. Prior research in retrieval augmentation typically follows a retriever-generator paradigm. In this context, traditional retrievers encounter challenges in precisely and seamlessly extracting query-relevant information from knowledge bases. To address this issue, this paper introduces a novel retrieval augmentation framework called ChatLR that primarily employs the powerful semantic understanding ability of Large Language Models (LLMs) as retrievers to achieve precise and concise information retrieval. Additionally, we construct an LLM-based search and question answering system tailored for the financial domain by fine-tuning LLM on two tasks including Text2API and API-ID recognition. Experimental results demonstrate the effectiveness of ChatLR in addressing user queries, achieving an overall information retrieval accuracy exceeding 98.8\%. |
2211.00713 | Saurabh Deshpande Mr. | Saurabh Deshpande, St\'ephane P.A. Bordas, Jakub Lengiewicz | MAgNET: A Graph U-Net Architecture for Mesh-Based Simulations | null | Engineering Applications of Artificial Intelligence, Volume 133,
Part B, 2024, 108055 | 10.1016/j.engappai.2024.108055 | null | cs.LG cs.CE | http://creativecommons.org/licenses/by/4.0/ | In many cutting-edge applications, high-fidelity computational models prove
to be too slow for practical use and are therefore replaced by much faster
surrogate models. Recently, deep learning techniques have increasingly been
utilized to accelerate such predictions. To enable learning on
large-dimensional and complex data, specific neural network architectures have
been developed, including convolutional and graph neural networks. In this
work, we present a novel encoder-decoder geometric deep learning framework
called MAgNET, which extends the well-known convolutional neural networks to
accommodate arbitrary graph-structured data. MAgNET consists of innovative
Multichannel Aggregation (MAg) layers and graph pooling/unpooling layers,
forming a graph U-Net architecture that is analogous to convolutional U-Nets.
We demonstrate the predictive capabilities of MAgNET in surrogate modeling for
non-linear finite element simulations in the mechanics of solids.
| [
{
"created": "Tue, 1 Nov 2022 19:23:45 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Mar 2023 13:56:35 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Apr 2024 14:22:26 GMT",
"version": "v3"
}
] | 2024-04-03 | [
[
"Deshpande",
"Saurabh",
""
],
[
"Bordas",
"Stéphane P. A.",
""
],
[
"Lengiewicz",
"Jakub",
""
]
] | In many cutting-edge applications, high-fidelity computational models prove to be too slow for practical use and are therefore replaced by much faster surrogate models. Recently, deep learning techniques have increasingly been utilized to accelerate such predictions. To enable learning on large-dimensional and complex data, specific neural network architectures have been developed, including convolutional and graph neural networks. In this work, we present a novel encoder-decoder geometric deep learning framework called MAgNET, which extends the well-known convolutional neural networks to accommodate arbitrary graph-structured data. MAgNET consists of innovative Multichannel Aggregation (MAg) layers and graph pooling/unpooling layers, forming a graph U-Net architecture that is analogous to convolutional U-Nets. We demonstrate the predictive capabilities of MAgNET in surrogate modeling for non-linear finite element simulations in the mechanics of solids. |
2211.11238 | Sijie Wang | Sijie Wang, Qiyu Kang, Rui She, Wee Peng Tay, Andreas Hartmannsgruber,
Diego Navarro Navarro | RobustLoc: Robust Camera Pose Regression in Challenging Driving
Environments | Accepted by AAAI 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Camera relocalization has various applications in autonomous driving.
Previous camera pose regression models consider only ideal scenarios where
there is little environmental perturbation. To deal with challenging driving
environments that may have changing seasons, weather, illumination, and the
presence of unstable objects, we propose RobustLoc, which derives its
robustness against perturbations from neural differential equations. Our model
uses a convolutional neural network to extract feature maps from multi-view
images, a robust neural differential equation diffusion block module to diffuse
information interactively, and a branched pose decoder with multi-layer
training to estimate the vehicle poses. Experiments demonstrate that RobustLoc
surpasses current state-of-the-art camera pose regression models and achieves
robust performance in various environments. Our code is released at:
https://github.com/sijieaaa/RobustLoc
| [
{
"created": "Mon, 21 Nov 2022 08:02:39 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Nov 2022 10:32:09 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Apr 2023 12:36:50 GMT",
"version": "v3"
},
{
"created": "Thu, 25 May 2023 12:17:09 GMT",
"version": "v4"
}
] | 2023-05-26 | [
[
"Wang",
"Sijie",
""
],
[
"Kang",
"Qiyu",
""
],
[
"She",
"Rui",
""
],
[
"Tay",
"Wee Peng",
""
],
[
"Hartmannsgruber",
"Andreas",
""
],
[
"Navarro",
"Diego Navarro",
""
]
] | Camera relocalization has various applications in autonomous driving. Previous camera pose regression models consider only ideal scenarios where there is little environmental perturbation. To deal with challenging driving environments that may have changing seasons, weather, illumination, and the presence of unstable objects, we propose RobustLoc, which derives its robustness against perturbations from neural differential equations. Our model uses a convolutional neural network to extract feature maps from multi-view images, a robust neural differential equation diffusion block module to diffuse information interactively, and a branched pose decoder with multi-layer training to estimate the vehicle poses. Experiments demonstrate that RobustLoc surpasses current state-of-the-art camera pose regression models and achieves robust performance in various environments. Our code is released at: https://github.com/sijieaaa/RobustLoc |
2003.08580 | Nikita Samarin | Nikita Samarin, Alisa Frik, Sean Brooks, Coye Cheshire, Serge Egelman | Surveying Vulnerable Populations: A Case Study of Civil Society
Organizations | [v2] Appears in the Workshop on Inclusive Privacy and Security (WIPS)
co-located with Symposium on Usable Privacy and Security (SOUPS) 2020; [v1]
Appears in the Networked Privacy Workshop co-located with ACM Conference on
Human Factors in Computing Systems (CHI) 2020 | null | null | null | cs.CY cs.CR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compared to organizations in other sectors, civil society organizations
(CSOs) are particularly vulnerable to security and privacy threats, as they
lack adequate resources and expertise to defend themselves. At the same time,
their security needs and practices have not gained much attention among
researchers, and existing solutions designed for average users do not
consider the contexts in which CSO employees operate. As part of our
preliminary work, we conducted an anonymous online survey with 102 CSO
employees to collect information about their perceived risks of different
security and privacy threats, and their self-reported mitigation strategies.
The design of our preliminary survey accounted for the unique requirements of
our target population by establishing trust with respondents, using
anonymity-preserving incentive strategies, and distributing the survey with the
help of a trusted intermediary. However, by carefully examining our methods and
the feedback received from respondents, we uncovered several issues with our
methodology, including the length of the survey, the framing of the questions,
and the design of the recruitment email. We hope that the discussion presented
in this paper will inform and assist researchers and practitioners working on
understanding and improving the security and privacy of CSOs.
| [
{
"created": "Thu, 19 Mar 2020 05:30:21 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jul 2020 21:01:40 GMT",
"version": "v2"
}
] | 2020-07-15 | [
[
"Samarin",
"Nikita",
""
],
[
"Frik",
"Alisa",
""
],
[
"Brooks",
"Sean",
""
],
[
"Cheshire",
"Coye",
""
],
[
"Egelman",
"Serge",
""
]
] | Compared to organizations in other sectors, civil society organizations (CSOs) are particularly vulnerable to security and privacy threats, as they lack adequate resources and expertise to defend themselves. At the same time, their security needs and practices have not gained much attention among researchers, and existing solutions designed for average users do not consider the contexts in which CSO employees operate. As part of our preliminary work, we conducted an anonymous online survey with 102 CSO employees to collect information about their perceived risks of different security and privacy threats, and their self-reported mitigation strategies. The design of our preliminary survey accounted for the unique requirements of our target population by establishing trust with respondents, using anonymity-preserving incentive strategies, and distributing the survey with the help of a trusted intermediary. However, by carefully examining our methods and the feedback received from respondents, we uncovered several issues with our methodology, including the length of the survey, the framing of the questions, and the design of the recruitment email. We hope that the discussion presented in this paper will inform and assist researchers and practitioners working on understanding and improving the security and privacy of CSOs. |
2208.06868 | Jaime C\'espedes Sisniega | Jaime C\'espedes-Sisniega and \'Alvaro L\'opez-Garc\'ia | Frouros: A Python library for drift detection in machine learning
systems | 11 pages, 1 table | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Frouros is an open-source Python library capable of detecting drift in
machine learning systems. It provides a combination of classical and more
recent algorithms for drift detection: both concept and data drift. We have
designed it with the objective of making it compatible with any machine
learning framework and easily adaptable to real-world use cases. The library is
developed following a set of best development and continuous integration
practices to ensure ease of maintenance and extensibility. The source code is
available at https://github.com/IFCA/frouros.
| [
{
"created": "Sun, 14 Aug 2022 15:25:41 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Jun 2023 10:50:56 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jul 2023 09:00:57 GMT",
"version": "v3"
},
{
"created": "Sun, 23 Jul 2023 10:36:55 GMT",
"version": "v4"
}
] | 2023-07-25 | [
[
"Céspedes-Sisniega",
"Jaime",
""
],
[
"López-García",
"Álvaro",
""
]
] | Frouros is an open-source Python library capable of detecting drift in machine learning systems. It provides a combination of classical and more recent algorithms for drift detection: both concept and data drift. We have designed it with the objective of making it compatible with any machine learning framework and easily adaptable to real-world use cases. The library is developed following a set of best development and continuous integration practices to ensure ease of maintenance and extensibility. The source code is available at https://github.com/IFCA/frouros. |
2008.10546 | Lingkai Kong | Lingkai Kong, Jimeng Sun and Chao Zhang | SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates | ICML2020. Code is available through
https://github.com/Lingkai-Kong/SDE-Net | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Uncertainty quantification is a fundamental yet unsolved problem for deep
learning. The Bayesian framework provides a principled way of uncertainty
estimation but is often not scalable to modern deep neural nets (DNNs) that
have a large number of parameters. Non-Bayesian methods are simple to implement
but often conflate different sources of uncertainty and require huge
computing resources. We propose a new method for quantifying uncertainties of
DNNs from a dynamical system perspective. The core of our method is to view DNN
transformations as state evolution of a stochastic dynamical system and
introduce a Brownian motion term for capturing epistemic uncertainty. Based on
this perspective, we propose a neural stochastic differential equation model
(SDE-Net) which consists of (1) a drift net that controls the system to fit the
predictive function; and (2) a diffusion net that captures epistemic
uncertainty. We theoretically analyze the existence and uniqueness of the
solution to SDE-Net. Our experiments demonstrate that the SDE-Net model can
outperform existing uncertainty estimation methods across a series of tasks
where uncertainty plays a fundamental role.
| [
{
"created": "Mon, 24 Aug 2020 16:33:54 GMT",
"version": "v1"
}
] | 2020-08-25 | [
[
"Kong",
"Lingkai",
""
],
[
"Sun",
"Jimeng",
""
],
[
"Zhang",
"Chao",
""
]
] | Uncertainty quantification is a fundamental yet unsolved problem for deep learning. The Bayesian framework provides a principled way of uncertainty estimation but is often not scalable to modern deep neural nets (DNNs) that have a large number of parameters. Non-Bayesian methods are simple to implement but often conflate different sources of uncertainties and require huge computing resources. We propose a new method for quantifying uncertainties of DNNs from a dynamical system perspective. The core of our method is to view DNN transformations as state evolution of a stochastic dynamical system and introduce a Brownian motion term for capturing epistemic uncertainty. Based on this perspective, we propose a neural stochastic differential equation model (SDE-Net) which consists of (1) a drift net that controls the system to fit the predictive function; and (2) a diffusion net that captures epistemic uncertainty. We theoretically analyze the existence and uniqueness of the solution to SDE-Net. Our experiments demonstrate that the SDE-Net model can outperform existing uncertainty estimation methods across a series of tasks where uncertainty plays a fundamental role. |
2209.09401 | Zichun Yu | Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu,
Maosong Sun and Jie Zhou | Automatic Label Sequence Generation for Prompting Sequence-to-sequence
Models | Accepted to COLING 2022 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prompting, which casts downstream applications as language modeling tasks,
has been shown to be sample efficient compared to standard fine-tuning with
pre-trained models. However, one pitfall of prompting is the need for
manually designed patterns, whose outcomes can be unintuitive and require large
validation sets to tune. To tackle the challenge, we propose AutoSeq, a fully
automatic prompting method: (1) We adopt natural language prompts on
sequence-to-sequence models, enabling free-form generation and larger label
search space; (2) We propose label sequences -- phrases with indefinite lengths
to verbalize the labels -- which eliminate the need for manual templates and are
more expressive than single label words; (3) We use beam search to
automatically generate a large number of label sequence candidates and propose
contrastive re-ranking to get the best combinations. AutoSeq significantly
outperforms other no-manual-design methods, such as soft prompt tuning, adapter
tuning, and automatic search on single label words; the generated label
sequences are even better than curated manual ones on a variety of tasks. Our
method reveals the potential of sequence-to-sequence models in few-shot
learning and sheds light on a path to generic and automatic prompting. The
source code of this paper can be obtained from
https://github.com/thunlp/Seq2Seq-Prompt.
| [
{
"created": "Tue, 20 Sep 2022 01:35:04 GMT",
"version": "v1"
}
] | 2022-09-21 | [
[
"Yu",
"Zichun",
""
],
[
"Gao",
"Tianyu",
""
],
[
"Zhang",
"Zhengyan",
""
],
[
"Lin",
"Yankai",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
],
[
"Zhou",
"Jie",
""
]
] | Prompting, which casts downstream applications as language modeling tasks, has been shown to be sample efficient compared to standard fine-tuning with pre-trained models. However, one pitfall of prompting is the need for manually designed patterns, whose outcomes can be unintuitive and require large validation sets to tune. To tackle the challenge, we propose AutoSeq, a fully automatic prompting method: (1) We adopt natural language prompts on sequence-to-sequence models, enabling free-form generation and larger label search space; (2) We propose label sequences -- phrases with indefinite lengths to verbalize the labels -- which eliminate the need for manual templates and are more expressive than single label words; (3) We use beam search to automatically generate a large number of label sequence candidates and propose contrastive re-ranking to get the best combinations. AutoSeq significantly outperforms other no-manual-design methods, such as soft prompt tuning, adapter tuning, and automatic search on single label words; the generated label sequences are even better than curated manual ones on a variety of tasks. Our method reveals the potential of sequence-to-sequence models in few-shot learning and sheds light on a path to generic and automatic prompting. The source code of this paper can be obtained from https://github.com/thunlp/Seq2Seq-Prompt.
1508.05545 | Christian Weilbach | Christian Weilbach, Konrad K\"uhne, Annette Bieniusa | Decoupling conflicts for configurable resolution in an open replication
system | 6 pages, 5 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Replikativ is a replication middleware supporting a new kind of confluent
replicated datatype resembling a distributed version control system. It retains
the order of write operations at the trade-off of reduced availability with
after-the-fact conflict resolution. The system allows developing applications
with distributed state in a similar fashion as native applications with
exclusive local state, while transparently exposing the necessary compromises
in terms of the CAP theorem. In this paper, we give a specification of the
replicated datatype and discuss its usage in the replikativ middleware.
Experiments with the implementation show the feasibility of the concept as a
foundation for replication as a service (RaaS).
| [
{
"created": "Sat, 22 Aug 2015 20:19:53 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jan 2016 08:40:06 GMT",
"version": "v2"
}
] | 2016-01-15 | [
[
"Weilbach",
"Christian",
""
],
[
"Kühne",
"Konrad",
""
],
[
"Bieniusa",
"Annette",
""
]
] | Replikativ is a replication middleware supporting a new kind of confluent replicated datatype resembling a distributed version control system. It retains the order of write operations at the trade-off of reduced availability with after-the-fact conflict resolution. The system allows developing applications with distributed state in a similar fashion as native applications with exclusive local state, while transparently exposing the necessary compromises in terms of the CAP theorem. In this paper, we give a specification of the replicated datatype and discuss its usage in the replikativ middleware. Experiments with the implementation show the feasibility of the concept as a foundation for replication as a service (RaaS).
1903.03920 | Pooyan Jamshidi | Pooyan Jamshidi, Javier C\'amara, Bradley Schmerl, Christian
K\"astner, David Garlan | Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation
in Autonomous Robots | 14th International Symposium on Software Engineering for Adaptive and
Self-Managing Systems (SEAMS 2019 ) | null | null | null | cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Modern cyber-physical systems (e.g., robotics systems) are typically composed
of physical and software components, the characteristics of which are likely to
change over time. Assumptions about parts of the system made at design time may
not hold at run time, especially when a system is deployed for long periods
(e.g., over decades). Self-adaptation is designed to find reconfigurations of
systems to handle such run-time inconsistencies. Planners can be used to find
and enact optimal reconfigurations in such an evolving context. However, for
systems that are highly configurable, such planning becomes intractable due to
the size of the adaptation space. To overcome this challenge, in this paper we
explore an approach that (a) uses machine learning to find Pareto-optimal
configurations without needing to explore every configuration and (b) restricts
the search space to such configurations to make planning tractable. We explore
this in the context of robot missions that need to consider task timeliness and
energy consumption. An independent evaluation shows that our approach results
in high-quality adaptation plans in uncertain and adversarial environments.
| [
{
"created": "Sun, 10 Mar 2019 04:24:49 GMT",
"version": "v1"
}
] | 2019-03-12 | [
[
"Jamshidi",
"Pooyan",
""
],
[
"Cámara",
"Javier",
""
],
[
"Schmerl",
"Bradley",
""
],
[
"Kästner",
"Christian",
""
],
[
"Garlan",
"David",
""
]
] | Modern cyber-physical systems (e.g., robotics systems) are typically composed of physical and software components, the characteristics of which are likely to change over time. Assumptions about parts of the system made at design time may not hold at run time, especially when a system is deployed for long periods (e.g., over decades). Self-adaptation is designed to find reconfigurations of systems to handle such run-time inconsistencies. Planners can be used to find and enact optimal reconfigurations in such an evolving context. However, for systems that are highly configurable, such planning becomes intractable due to the size of the adaptation space. To overcome this challenge, in this paper we explore an approach that (a) uses machine learning to find Pareto-optimal configurations without needing to explore every configuration and (b) restricts the search space to such configurations to make planning tractable. We explore this in the context of robot missions that need to consider task timeliness and energy consumption. An independent evaluation shows that our approach results in high-quality adaptation plans in uncertain and adversarial environments. |
2106.03050 | Jiafei Lyu | Jiafei Lyu, Xiaoteng Ma, Jiangpeng Yan, Xiu Li | Efficient Continuous Control with Double Actors and Regularized Critics | 21 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to obtain good value estimation is one of the key problems in
Reinforcement Learning (RL). Current value estimation methods, such as DDPG and
TD3, suffer from unnecessary over- or underestimation bias. In this paper, we
explore the potential of double actors, which has been neglected for a long
time, for better value function estimation in the continuous setting. First, we
uncover and demonstrate the bias alleviation property of double actors by
building double actors upon single critic and double critics to handle
overestimation bias in DDPG and underestimation bias in TD3 respectively. Next,
we interestingly find that double actors help improve the exploration ability
of the agent. Finally, to mitigate the uncertainty of value estimate from
double critics, we further propose to regularize the critic networks under
double actors architecture, which gives rise to Double Actors Regularized
Critics (DARC) algorithm. Extensive experimental results on challenging
continuous control tasks show that DARC significantly outperforms
state-of-the-art methods with higher sample efficiency.
| [
{
"created": "Sun, 6 Jun 2021 07:04:48 GMT",
"version": "v1"
}
] | 2021-06-08 | [
[
"Lyu",
"Jiafei",
""
],
[
"Ma",
"Xiaoteng",
""
],
[
"Yan",
"Jiangpeng",
""
],
[
"Li",
"Xiu",
""
]
] | How to obtain good value estimation is one of the key problems in Reinforcement Learning (RL). Current value estimation methods, such as DDPG and TD3, suffer from unnecessary over- or underestimation bias. In this paper, we explore the potential of double actors, which has been neglected for a long time, for better value function estimation in continuous setting. First, we uncover and demonstrate the bias alleviation property of double actors by building double actors upon single critic and double critics to handle overestimation bias in DDPG and underestimation bias in TD3 respectively. Next, we interestingly find that double actors help improve the exploration ability of the agent. Finally, to mitigate the uncertainty of value estimate from double critics, we further propose to regularize the critic networks under double actors architecture, which gives rise to Double Actors Regularized Critics (DARC) algorithm. Extensive experimental results on challenging continuous control tasks show that DARC significantly outperforms state-of-the-art methods with higher sample efficiency. |
2310.15705 | Hitesh Gudwani | Hitesh Gudwani | Learning-based Scheduling for Information Accuracy and Freshness in
Wireless Networks | 21 pages, 5 figures | null | null | null | cs.AI cs.NI | http://creativecommons.org/licenses/by/4.0/ | We consider a system of multiple sources, a single communication channel, and
a single monitoring station. Each source measures a time-varying quantity with
varying levels of accuracy and one of them sends its update to the monitoring
station via the channel. The probability of success of each attempted
communication is a function of the source scheduled for transmitting its
update. Both the probability of correct measurement and the probability of
successful transmission of all the sources are unknown to the scheduler. The
metric of interest is the reward received by the system which depends on the
accuracy of the last update received by the destination and the
Age-of-Information (AoI) of the system. We model our scheduling problem as a
variant of the multi-arm bandit problem with sources as different arms. We
compare the performance of all $4$ standard bandit policies, namely, ETC,
$\epsilon$-greedy, UCB, and TS suitably adjusted to our system model via
simulations. In addition, we provide analytical guarantees of $2$ of these
policies, ETC, and $\epsilon$-greedy. Finally, we characterize the lower bound
on the cumulative regret achievable by any policy.
| [
{
"created": "Tue, 24 Oct 2023 10:31:34 GMT",
"version": "v1"
}
] | 2023-10-25 | [
[
"Gudwani",
"Hitesh",
""
]
] | We consider a system of multiple sources, a single communication channel, and a single monitoring station. Each source measures a time-varying quantity with varying levels of accuracy and one of them sends its update to the monitoring station via the channel. The probability of success of each attempted communication is a function of the source scheduled for transmitting its update. Both the probability of correct measurement and the probability of successful transmission of all the sources are unknown to the scheduler. The metric of interest is the reward received by the system which depends on the accuracy of the last update received by the destination and the Age-of-Information (AoI) of the system. We model our scheduling problem as a variant of the multi-arm bandit problem with sources as different arms. We compare the performance of all $4$ standard bandit policies, namely, ETC, $\epsilon$-greedy, UCB, and TS suitably adjusted to our system model via simulations. In addition, we provide analytical guarantees of $2$ of these policies, ETC, and $\epsilon$-greedy. Finally, we characterize the lower bound on the cumulative regret achievable by any policy. |
2407.15912 | Jingru Yu | Jingru Yu, Yi Yu, Xuhong Wang, Yilun Lin, Manzhi Yang, Yu Qiao,
Fei-Yue Wang | The Shadow of Fraud: The Emerging Danger of AI-powered Social
Engineering and its Possible Cure | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social engineering (SE) attacks remain a significant threat to both
individuals and organizations. The advancement of Artificial Intelligence (AI),
including diffusion models and large language models (LLMs), has potentially
intensified these threats by enabling more personalized and convincing attacks.
This survey paper categorizes SE attack mechanisms, analyzes their evolution,
and explores methods for measuring these threats. It highlights the challenges
in raising awareness about the risks of AI-enhanced SE attacks and offers
insights into developing proactive and adaptable defense strategies.
Additionally, we introduce a categorization of the evolving nature of
AI-powered social engineering attacks into "3E phases": Enlarging, wherein the
magnitude of attacks expands through the leverage of digital media; Enriching,
introducing novel attack vectors and techniques; and Emerging, signifying the
advent of novel threats and methods. Moreover, we emphasize the necessity for a
robust framework to assess the risk of AI-powered SE attacks. By identifying
and addressing gaps in existing research, we aim to guide future studies and
encourage the development of more effective defenses against the growing threat
of AI-powered social engineering.
| [
{
"created": "Mon, 22 Jul 2024 17:37:31 GMT",
"version": "v1"
}
] | 2024-07-24 | [
[
"Yu",
"Jingru",
""
],
[
"Yu",
"Yi",
""
],
[
"Wang",
"Xuhong",
""
],
[
"Lin",
"Yilun",
""
],
[
"Yang",
"Manzhi",
""
],
[
"Qiao",
"Yu",
""
],
[
"Wang",
"Fei-Yue",
""
]
] | Social engineering (SE) attacks remain a significant threat to both individuals and organizations. The advancement of Artificial Intelligence (AI), including diffusion models and large language models (LLMs), has potentially intensified these threats by enabling more personalized and convincing attacks. This survey paper categorizes SE attack mechanisms, analyzes their evolution, and explores methods for measuring these threats. It highlights the challenges in raising awareness about the risks of AI-enhanced SE attacks and offers insights into developing proactive and adaptable defense strategies. Additionally, we introduce a categorization of the evolving nature of AI-powered social engineering attacks into "3E phases": Enlarging, wherein the magnitude of attacks expands through the leverage of digital media; Enriching, introducing novel attack vectors and techniques; and Emerging, signifying the advent of novel threats and methods. Moreover, we emphasize the necessity for a robust framework to assess the risk of AI-powered SE attacks. By identifying and addressing gaps in existing research, we aim to guide future studies and encourage the development of more effective defenses against the growing threat of AI-powered social engineering. |
1807.10564 | Bryan Eikema | Bryan Eikema and Wilker Aziz | Auto-Encoding Variational Neural Machine Translation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a deep generative model of bilingual sentence pairs for machine
translation. The model generates source and target sentences jointly from a
shared latent representation and is parameterised by neural networks. We
perform efficient training using amortised variational inference and
reparameterised gradients. Additionally, we discuss the statistical
implications of joint modelling and propose an efficient approximation to
maximum a posteriori decoding for fast test-time predictions. We demonstrate
the effectiveness of our model in three machine translation scenarios:
in-domain training, mixed-domain training, and learning from a mix of
gold-standard and synthetic data. Our experiments show consistently that our
joint formulation outperforms conditional modelling (i.e. standard neural
machine translation) in all such scenarios.
| [
{
"created": "Fri, 27 Jul 2018 13:03:06 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Aug 2018 07:50:23 GMT",
"version": "v2"
},
{
"created": "Wed, 29 May 2019 09:20:21 GMT",
"version": "v3"
},
{
"created": "Fri, 31 May 2019 14:00:00 GMT",
"version": "v4"
}
] | 2019-06-03 | [
[
"Eikema",
"Bryan",
""
],
[
"Aziz",
"Wilker",
""
]
] | We present a deep generative model of bilingual sentence pairs for machine translation. The model generates source and target sentences jointly from a shared latent representation and is parameterised by neural networks. We perform efficient training using amortised variational inference and reparameterised gradients. Additionally, we discuss the statistical implications of joint modelling and propose an efficient approximation to maximum a posteriori decoding for fast test-time predictions. We demonstrate the effectiveness of our model in three machine translation scenarios: in-domain training, mixed-domain training, and learning from a mix of gold-standard and synthetic data. Our experiments show consistently that our joint formulation outperforms conditional modelling (i.e. standard neural machine translation) in all such scenarios. |
2010.04476 | Dominik Helm | Dominik Helm, Florian K\"ubler, Michael Reif, Michael Eichberg, Mira
Mezini | Modular Collaborative Program Analysis in OPAL | Joint European Software Engineering Conference and Symposium on the
Foundations of Software Engineering, Nov 2020 | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current approaches combining multiple static analyses deriving different,
independent properties focus either on modularity or performance. Whereas
declarative approaches facilitate modularity and automated,
analysis-independent optimizations, imperative approaches foster manual,
analysis-specific optimizations.
In this paper, we present a novel approach to static analyses that leverages
the modularity of blackboard systems and combines declarative and imperative
techniques. Our approach allows exchangeability and pluggable extension of
analyses in order to improve sound(i)ness, precision, and scalability and
explicitly enables the combination of otherwise incompatible analyses. With our
approach integrated in the OPAL framework, we were able to implement various
dissimilar analyses, including a points-to analysis that outperforms an
equivalent analysis from Doop, the state-of-the-art points-to analysis
framework.
| [
{
"created": "Fri, 9 Oct 2020 09:57:53 GMT",
"version": "v1"
}
] | 2020-10-12 | [
[
"Helm",
"Dominik",
""
],
[
"Kübler",
"Florian",
""
],
[
"Reif",
"Michael",
""
],
[
"Eichberg",
"Michael",
""
],
[
"Mezini",
"Mira",
""
]
] | Current approaches combining multiple static analyses deriving different, independent properties focus either on modularity or performance. Whereas declarative approaches facilitate modularity and automated, analysis-independent optimizations, imperative approaches foster manual, analysis-specific optimizations. In this paper, we present a novel approach to static analyses that leverages the modularity of blackboard systems and combines declarative and imperative techniques. Our approach allows exchangeability, and pluggable extension of analyses in order to improve sound(i)ness, precision, and scalability and explicitly enables the combination of otherwise incompatible analyses. With our approach integrated in the OPAL framework, we were able to implement various dissimilar analyses, including a points-to analysis that outperforms an equivalent analysis from Doop, the state-of-the-art points-to analysis framework. |
2407.06797 | Fotios Lygerakis | Fotios Lygerakis, Elmar Rueckert | ED-VAE: Entropy Decomposition of ELBO in Variational Autoencoders | null | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Traditional Variational Autoencoders (VAEs) are constrained by the
limitations of the Evidence Lower Bound (ELBO) formulation, particularly when
utilizing simplistic, non-analytic, or unknown prior distributions. These
limitations inhibit the VAE's ability to generate high-quality samples and
provide clear, interpretable latent representations. This work introduces the
Entropy Decomposed Variational Autoencoder (ED-VAE), a novel re-formulation of
the ELBO that explicitly includes entropy and cross-entropy components. This
reformulation significantly enhances model flexibility, allowing for the
integration of complex and non-standard priors. By providing more detailed
control over the encoding and regularization of latent spaces, ED-VAE not only
improves interpretability but also effectively captures the complex
interactions between latent variables and observed data, thus leading to better
generative performance.
| [
{
"created": "Tue, 9 Jul 2024 12:09:21 GMT",
"version": "v1"
}
] | 2024-07-10 | [
[
"Lygerakis",
"Fotios",
""
],
[
"Rueckert",
"Elmar",
""
]
] | Traditional Variational Autoencoders (VAEs) are constrained by the limitations of the Evidence Lower Bound (ELBO) formulation, particularly when utilizing simplistic, non-analytic, or unknown prior distributions. These limitations inhibit the VAE's ability to generate high-quality samples and provide clear, interpretable latent representations. This work introduces the Entropy Decomposed Variational Autoencoder (ED-VAE), a novel re-formulation of the ELBO that explicitly includes entropy and cross-entropy components. This reformulation significantly enhances model flexibility, allowing for the integration of complex and non-standard priors. By providing more detailed control over the encoding and regularization of latent spaces, ED-VAE not only improves interpretability but also effectively captures the complex interactions between latent variables and observed data, thus leading to better generative performance. |
2204.09992 | Chen Tang | Chen Tang, Haoyu Zhai, Kai Ouyang, Zhi Wang, Yifei Zhu, Wenwu Zhu | Arbitrary Bit-width Network: A Joint Layer-Wise Quantization and
Adaptive Inference Approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional model quantization methods use a fixed quantization scheme to
different data samples, which ignores the inherent "recognition difficulty"
differences between various samples. We propose to feed different data samples
with varying quantization schemes to achieve a data-dependent dynamic
inference, at a fine-grained layer level. However, enabling this adaptive
inference with changeable layer-wise quantization schemes is challenging
because the combination of bit-widths and layers is growing exponentially,
making it extremely difficult to train a single model in such a vast searching
space and use it in practice. To solve this problem, we present the Arbitrary
Bit-width Network (ABN), where the bit-widths of a single deep network can
change at runtime for different data samples, with a layer-wise granularity.
Specifically, first we build a weight-shared layer-wise quantizable
"super-network" in which each layer can be allocated with multiple bit-widths
and thus quantized differently on demand. The super-network provides a
considerably large number of combinations of bit-widths and layers, each of
which can be used during inference without retraining or storing myriad models.
Second, based on the well-trained super-network, each layer's runtime bit-width
selection decision is modeled as a Markov Decision Process (MDP) and solved by
an adaptive inference strategy accordingly. Experiments show that the
super-network can be built without accuracy degradation, and the bit-widths
allocation of each layer can be adjusted to deal with various inputs on the
fly. On ImageNet classification, we achieve 1.1% top1 accuracy improvement
while saving 36.2% BitOps.
| [
{
"created": "Thu, 21 Apr 2022 09:36:43 GMT",
"version": "v1"
}
] | 2022-04-22 | [
[
"Tang",
"Chen",
""
],
[
"Zhai",
"Haoyu",
""
],
[
"Ouyang",
"Kai",
""
],
[
"Wang",
"Zhi",
""
],
[
"Zhu",
"Yifei",
""
],
[
"Zhu",
"Wenwu",
""
]
] | Conventional model quantization methods use a fixed quantization scheme to different data samples, which ignores the inherent "recognition difficulty" differences between various samples. We propose to feed different data samples with varying quantization schemes to achieve a data-dependent dynamic inference, at a fine-grained layer level. However, enabling this adaptive inference with changeable layer-wise quantization schemes is challenging because the combination of bit-widths and layers is growing exponentially, making it extremely difficult to train a single model in such a vast searching space and use it in practice. To solve this problem, we present the Arbitrary Bit-width Network (ABN), where the bit-widths of a single deep network can change at runtime for different data samples, with a layer-wise granularity. Specifically, first we build a weight-shared layer-wise quantizable "super-network" in which each layer can be allocated with multiple bit-widths and thus quantized differently on demand. The super-network provides a considerably large number of combinations of bit-widths and layers, each of which can be used during inference without retraining or storing myriad models. Second, based on the well-trained super-network, each layer's runtime bit-width selection decision is modeled as a Markov Decision Process (MDP) and solved by an adaptive inference strategy accordingly. Experiments show that the super-network can be built without accuracy degradation, and the bit-widths allocation of each layer can be adjusted to deal with various inputs on the fly. On ImageNet classification, we achieve 1.1% top1 accuracy improvement while saving 36.2% BitOps. |
2209.12602 | D\'avid Sztah\'o | D\'avid Sztah\'o and Attila Fejes | Effects of language mismatch in automatic forensic voice comparison
using deep learning embeddings | null | null | 10.1111/1556-4029.15250 | null | cs.SD cs.CL eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In forensic voice comparison the speaker embedding has become widely popular
in the last 10 years. Most of the pretrained speaker embeddings are trained on
English corpora, because they are easily accessible. Thus, language dependency can
be an important factor in automatic forensic voice comparison, especially when
the target language is linguistically very different. There are numerous
commercial systems available, but their models are mainly trained on a
different language (mostly English) than the target language. In the case of a
low-resource language, developing a corpus for forensic purposes containing
enough speakers to train deep learning models is costly. This study aims to
investigate whether a model pre-trained on an English corpus can be used on a
target low-resource language (here, Hungarian), different from the one the
model was trained on. Also, often multiple samples are not available from the offender
(unknown speaker). Therefore, samples are compared pairwise with and without
speaker enrollment for suspect (known) speakers. Two corpora are applied that
were developed especially for forensic purposes, and a third that is meant for
traditional speaker verification. Two deep learning based speaker embedding
vector extraction methods are used: the x-vector and ECAPA-TDNN. Speaker
verification was evaluated in the likelihood-ratio framework. A comparison is
made between the language combinations (modeling, LR calibration, evaluation).
The results were evaluated by minCllr and EER metrics. It was found that the
model pre-trained on a different language but on a corpus with a huge amount of
speakers performs well on samples with language mismatch. The effect of sample
durations and speaking styles was also examined. It was found that the longer
the duration of the sample in question the better the performance is. Also,
there is no real difference if various speaking styles are applied.
| [
{
"created": "Mon, 26 Sep 2022 11:49:37 GMT",
"version": "v1"
}
] | 2023-04-12 | [
[
"Sztahó",
"Dávid",
""
],
[
"Fejes",
"Attila",
""
]
] | In forensic voice comparison the speaker embedding has become widely popular in the last 10 years. Most of the pretrained speaker embeddings are trained on English corpora, because they are easily accessible. Thus, language dependency can be an important factor in automatic forensic voice comparison, especially when the target language is linguistically very different. There are numerous commercial systems available, but their models are mainly trained on a different language (mostly English) than the target language. In the case of a low-resource language, developing a corpus for forensic purposes containing enough speakers to train deep learning models is costly. This study aims to investigate whether a model pre-trained on an English corpus can be used on a target low-resource language (here, Hungarian), different from the one the model was trained on. Also, often multiple samples are not available from the offender (unknown speaker). Therefore, samples are compared pairwise with and without speaker enrollment for suspect (known) speakers. Two corpora are applied that were developed especially for forensic purposes, and a third that is meant for traditional speaker verification. Two deep learning based speaker embedding vector extraction methods are used: the x-vector and ECAPA-TDNN. Speaker verification was evaluated in the likelihood-ratio framework. A comparison is made between the language combinations (modeling, LR calibration, evaluation). The results were evaluated by minCllr and EER metrics. It was found that the model pre-trained on a different language but on a corpus with a huge amount of speakers performs well on samples with language mismatch. The effect of sample durations and speaking styles was also examined. It was found that the longer the duration of the sample in question the better the performance is. Also, there is no real difference if various speaking styles are applied.
1503.04913 | EPTCS | Nils J\"ahnig (TU Berlin), Thomas G\"othel (TU Berlin), Sabine Glesner
(TU Berlin) | A Denotational Semantics for Communicating Unstructured Code | In Proceedings FESCA 2015, arXiv:1503.04378 | EPTCS 178, 2015, pp. 9-21 | 10.4204/EPTCS.178.2 | null | cs.PL cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important property of programming language semantics is that they should
be compositional. However, unstructured low-level code contains goto-like
commands making it hard to define a semantics that is compositional. In this
paper, we follow the ideas of Saabas and Uustalu to structure low-level code.
This gives us the possibility to define a compositional denotational semantics
based on least fixed points to allow for the use of inductive verification
methods. We capture the semantics of communication using finite traces similar
to the denotations of CSP. In addition, we examine properties of this semantics
and give an example that demonstrates reasoning about communication and jumps.
With this semantics, we lay the foundations for a proof calculus that captures
both the semantics of unstructured low-level code and communication.
| [
{
"created": "Tue, 17 Mar 2015 03:59:45 GMT",
"version": "v1"
}
] | 2015-03-18 | [
[
"Jähnig",
"Nils",
"",
"TU Berlin"
],
[
"Göthel",
"Thomas",
"",
"TU Berlin"
],
[
"Glesner",
"Sabine",
"",
"TU Berlin"
]
] | An important property of programming language semantics is that they should be compositional. However, unstructured low-level code contains goto-like commands making it hard to define a semantics that is compositional. In this paper, we follow the ideas of Saabas and Uustalu to structure low-level code. This gives us the possibility to define a compositional denotational semantics based on least fixed points to allow for the use of inductive verification methods. We capture the semantics of communication using finite traces similar to the denotations of CSP. In addition, we examine properties of this semantics and give an example that demonstrates reasoning about communication and jumps. With this semantics, we lay the foundations for a proof calculus that captures both the semantics of unstructured low-level code and communication. |
2304.10190 | Souneil Park | Souneil Park, Pavol Mulinka, Diego Perino | A Large-scale Examination of "Socioeconomic" Fairness in Mobile Networks | null | null | 10.1145/3530190.3534809 | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Internet access is a special resource whose need has become universal
across the public, whereas the service is operated in the private sector. Mobile
Network Operators (MNOs) put effort into management, planning, and
optimization; however, they do not link such activities to socioeconomic
fairness. In this paper, we make a first step towards understanding the
relation between socioeconomic status of customers and network performance, and
investigate potential discrimination in network deployment and management. The
scope of our study spans various aspects, including urban geography, network
resource deployment, data consumption, and device distribution. A novel
methodology that enables a geo-socioeconomic perspective on mobile networks is
developed for the study. The results are based on an actual infrastructure in
multiple cities, covering millions of users densely covering the socioeconomic
scale. We report a thorough examination of the fairness status, its
relationship with various structural factors, and potential class specific
solutions.
| [
{
"created": "Thu, 20 Apr 2023 10:03:51 GMT",
"version": "v1"
}
] | 2023-04-21 | [
[
"Park",
"Souneil",
""
],
[
"Mulinka",
"Pavol",
""
],
[
"Perino",
"Diego",
""
]
] | Internet access is a special resource whose need has become universal across the public, whereas the service is operated in the private sector. Mobile Network Operators (MNOs) put effort into management, planning, and optimization; however, they do not link such activities to socioeconomic fairness. In this paper, we make a first step towards understanding the relation between socioeconomic status of customers and network performance, and investigate potential discrimination in network deployment and management. The scope of our study spans various aspects, including urban geography, network resource deployment, data consumption, and device distribution. A novel methodology that enables a geo-socioeconomic perspective on mobile networks is developed for the study. The results are based on an actual infrastructure in multiple cities, covering millions of users densely covering the socioeconomic scale. We report a thorough examination of the fairness status, its relationship with various structural factors, and potential class specific solutions. |
1805.07830 | Shayegan Omidshafiei | Shayegan Omidshafiei, Dong-Ki Kim, Miao Liu, Gerald Tesauro, Matthew
Riemer, Christopher Amato, Murray Campbell, Jonathan P. How | Learning to Teach in Cooperative Multiagent Reinforcement Learning | null | null | null | null | cs.MA cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collective human knowledge has clearly benefited from the fact that
innovations by individuals are taught to others through communication. Similar
to human social groups, agents in distributed learning systems would likely
benefit from communication to share knowledge and teach skills. The problem of
teaching to improve agent learning has been investigated by prior works, but
these approaches make assumptions that prevent application of teaching to
general multiagent problems, or require domain expertise for problems they can
apply to. This learning to teach problem has inherent complexities related to
measuring long-term impacts of teaching that compound the standard multiagent
coordination challenges. In contrast to existing works, this paper presents the
first general framework and algorithm for intelligent agents to learn to teach
in a multiagent environment. Our algorithm, Learning to Coordinate and Teach
Reinforcement (LeCTR), addresses peer-to-peer teaching in cooperative
multiagent reinforcement learning. Each agent in our approach learns both when
and what to advise, then uses the received advice to improve local learning.
Importantly, these roles are not fixed; these agents learn to assume the role
of student and/or teacher at the appropriate moments, requesting and providing
advice in order to improve teamwide performance and learning. Empirical
comparisons against state-of-the-art teaching methods show that our teaching
agents not only learn significantly faster, but also learn to coordinate in
tasks where existing methods fail.
| [
{
"created": "Sun, 20 May 2018 22:23:46 GMT",
"version": "v1"
},
{
"created": "Tue, 22 May 2018 14:10:38 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Jun 2018 16:21:50 GMT",
"version": "v3"
},
{
"created": "Fri, 31 Aug 2018 18:36:15 GMT",
"version": "v4"
}
] | 2018-09-05 | [
[
"Omidshafiei",
"Shayegan",
""
],
[
"Kim",
"Dong-Ki",
""
],
[
"Liu",
"Miao",
""
],
[
"Tesauro",
"Gerald",
""
],
[
"Riemer",
"Matthew",
""
],
[
"Amato",
"Christopher",
""
],
[
"Campbell",
"Murray",
""
],
[
"How",
"Jonathan P.",
""
]
] | Collective human knowledge has clearly benefited from the fact that innovations by individuals are taught to others through communication. Similar to human social groups, agents in distributed learning systems would likely benefit from communication to share knowledge and teach skills. The problem of teaching to improve agent learning has been investigated by prior works, but these approaches make assumptions that prevent application of teaching to general multiagent problems, or require domain expertise for problems they can apply to. This learning to teach problem has inherent complexities related to measuring long-term impacts of teaching that compound the standard multiagent coordination challenges. In contrast to existing works, this paper presents the first general framework and algorithm for intelligent agents to learn to teach in a multiagent environment. Our algorithm, Learning to Coordinate and Teach Reinforcement (LeCTR), addresses peer-to-peer teaching in cooperative multiagent reinforcement learning. Each agent in our approach learns both when and what to advise, then uses the received advice to improve local learning. Importantly, these roles are not fixed; these agents learn to assume the role of student and/or teacher at the appropriate moments, requesting and providing advice in order to improve teamwide performance and learning. Empirical comparisons against state-of-the-art teaching methods show that our teaching agents not only learn significantly faster, but also learn to coordinate in tasks where existing methods fail. |
1405.7711 | David L. Chen | David L. Chen, Joohyun Kim, Raymond J. Mooney | Training a Multilingual Sportscaster: Using Perceptual Context to Learn
Language | null | Journal Of Artificial Intelligence Research, Volume 37, pages
397-435, 2010 | 10.1613/jair.2962 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel framework for learning to interpret and generate language
using only perceptual context as supervision. We demonstrate its capabilities
by developing a system that learns to sportscast simulated robot soccer games
in both English and Korean without any language-specific prior knowledge.
Training employs only ambiguous supervision consisting of a stream of
descriptive textual comments and a sequence of events extracted from the
simulation trace. The system simultaneously establishes correspondences between
individual comments and the events that they describe while building a
translation model that supports both parsing and generation. We also present a
novel algorithm for learning which events are worth describing. Human
evaluations of the generated commentaries indicate they are of reasonable
quality and in some cases even on par with those produced by humans for our
limited domain.
| [
{
"created": "Thu, 16 Jan 2014 04:29:26 GMT",
"version": "v1"
}
] | 2014-06-02 | [
[
"Chen",
"David L.",
""
],
[
"Kim",
"Joohyun",
""
],
[
"Mooney",
"Raymond J.",
""
]
] | We present a novel framework for learning to interpret and generate language using only perceptual context as supervision. We demonstrate its capabilities by developing a system that learns to sportscast simulated robot soccer games in both English and Korean without any language-specific prior knowledge. Training employs only ambiguous supervision consisting of a stream of descriptive textual comments and a sequence of events extracted from the simulation trace. The system simultaneously establishes correspondences between individual comments and the events that they describe while building a translation model that supports both parsing and generation. We also present a novel algorithm for learning which events are worth describing. Human evaluations of the generated commentaries indicate they are of reasonable quality and in some cases even on par with those produced by humans for our limited domain. |
2005.03709 | Takato Otsuzuki | Takato Otsuzuki, Hideaki Hayashi, Yuchen Zheng and Seiichi Uchida | Regularized Pooling | 12 pages, 10 figures, accepted for ICANN 2020 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In convolutional neural networks (CNNs), pooling operations play important
roles such as dimensionality reduction and deformation compensation. In
general, max pooling, which is the most widely used operation for local
pooling, is performed independently for each kernel. However, the deformation
may be spatially smooth over the neighboring kernels. This means that max
pooling is too flexible to compensate for actual deformations. In other words,
its excessive flexibility risks canceling the essential spatial differences
between classes. In this paper, we propose regularized pooling, which enables
the value selection direction in the pooling operation to be spatially smooth
across adjacent kernels so as to compensate only for actual deformations. The
results of experiments on handwritten character images and texture images
showed that regularized pooling not only improves recognition accuracy but also
accelerates the convergence of learning compared with conventional pooling
operations.
| [
{
"created": "Wed, 6 May 2020 09:02:17 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Aug 2020 07:10:34 GMT",
"version": "v2"
}
] | 2020-08-07 | [
[
"Otsuzuki",
"Takato",
""
],
[
"Hayashi",
"Hideaki",
""
],
[
"Zheng",
"Yuchen",
""
],
[
"Uchida",
"Seiichi",
""
]
] | In convolutional neural networks (CNNs), pooling operations play important roles such as dimensionality reduction and deformation compensation. In general, max pooling, which is the most widely used operation for local pooling, is performed independently for each kernel. However, the deformation may be spatially smooth over the neighboring kernels. This means that max pooling is too flexible to compensate for actual deformations. In other words, its excessive flexibility risks canceling the essential spatial differences between classes. In this paper, we propose regularized pooling, which enables the value selection direction in the pooling operation to be spatially smooth across adjacent kernels so as to compensate only for actual deformations. The results of experiments on handwritten character images and texture images showed that regularized pooling not only improves recognition accuracy but also accelerates the convergence of learning compared with conventional pooling operations. |
2312.13633 | Haifeng Huang | Haifeng Huang, Yang Zhao, Zehan Wang, Yan Xia, Zhou Zhao | Multi-Modal Domain Adaptation Across Video Scenes for Temporal Video
Grounding | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Temporal Video Grounding (TVG) aims to localize the temporal boundary of a
specific segment in an untrimmed video based on a given language query. Since
datasets in this domain are often gathered from limited video scenes, models
tend to overfit to scene-specific factors, which leads to suboptimal
performance when encountering new scenes in real-world applications. In a new
scene, the fine-grained annotations are often insufficient due to the expensive
labor cost, while the coarse-grained video-query pairs are easier to obtain.
Thus, to address this issue and enhance model performance on new scenes, we
explore the TVG task in an unsupervised domain adaptation (UDA) setting across
scenes for the first time, where the video-query pairs in the source scene
(domain) are labeled with temporal boundaries, while those in the target scene
are not. Under the UDA setting, we introduce a novel Adversarial Multi-modal
Domain Adaptation (AMDA) method to adaptively adjust the model's scene-related
knowledge by incorporating insights from the target data. Specifically, we
tackle the domain gap by utilizing domain discriminators, which help identify
valuable scene-related features effective across both domains. Concurrently, we
mitigate the semantic gap between different modalities by aligning video-query
pairs with related semantics. Furthermore, we employ a mask-reconstruction
approach to enhance the understanding of temporal semantics within a scene.
Extensive experiments on Charades-STA, ActivityNet Captions, and YouCook2
demonstrate the effectiveness of our proposed method.
| [
{
"created": "Thu, 21 Dec 2023 07:49:27 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Huang",
"Haifeng",
""
],
[
"Zhao",
"Yang",
""
],
[
"Wang",
"Zehan",
""
],
[
"Xia",
"Yan",
""
],
[
"Zhao",
"Zhou",
""
]
] | Temporal Video Grounding (TVG) aims to localize the temporal boundary of a specific segment in an untrimmed video based on a given language query. Since datasets in this domain are often gathered from limited video scenes, models tend to overfit to scene-specific factors, which leads to suboptimal performance when encountering new scenes in real-world applications. In a new scene, the fine-grained annotations are often insufficient due to the expensive labor cost, while the coarse-grained video-query pairs are easier to obtain. Thus, to address this issue and enhance model performance on new scenes, we explore the TVG task in an unsupervised domain adaptation (UDA) setting across scenes for the first time, where the video-query pairs in the source scene (domain) are labeled with temporal boundaries, while those in the target scene are not. Under the UDA setting, we introduce a novel Adversarial Multi-modal Domain Adaptation (AMDA) method to adaptively adjust the model's scene-related knowledge by incorporating insights from the target data. Specifically, we tackle the domain gap by utilizing domain discriminators, which help identify valuable scene-related features effective across both domains. Concurrently, we mitigate the semantic gap between different modalities by aligning video-query pairs with related semantics. Furthermore, we employ a mask-reconstruction approach to enhance the understanding of temporal semantics within a scene. Extensive experiments on Charades-STA, ActivityNet Captions, and YouCook2 demonstrate the effectiveness of our proposed method. |
2106.03270 | Shang-Wen Li | Hongyin Luo, Shuyan Dong, Yung-Sung Chuang, Shang-Wen Li | Meta-learning for downstream aware and agnostic pretraining | Extended abstract | Meta Learning and Its Applications to Natural Language Processing
workshop at ACL 2021 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neural network pretraining is gaining attention due to its outstanding
performance in natural language processing applications. However, pretraining
usually leverages predefined task sequences to learn general linguistic clues.
The lack of mechanisms in choosing proper tasks during pretraining makes the
learning and knowledge encoding inefficient. We thus propose using
meta-learning to select tasks that provide the most informative learning
signals in each episode of pretraining. With the proposed method, we aim to
achieve better efficiency in computation and memory usage for the pretraining
process and resulting networks while maintaining the performance. In this
preliminary work, we discuss the algorithm of the method and its two variants,
downstream-aware and downstream-agnostic pretraining. Our experiment plan is
also summarized, while empirical results will be shared in future work.
| [
{
"created": "Sun, 6 Jun 2021 23:08:09 GMT",
"version": "v1"
}
] | 2021-06-08 | [
[
"Luo",
"Hongyin",
""
],
[
"Dong",
"Shuyan",
""
],
[
"Chuang",
"Yung-Sung",
""
],
[
"Li",
"Shang-Wen",
""
]
] | Neural network pretraining is gaining attention due to its outstanding performance in natural language processing applications. However, pretraining usually leverages predefined task sequences to learn general linguistic clues. The lack of mechanisms in choosing proper tasks during pretraining makes the learning and knowledge encoding inefficient. We thus propose using meta-learning to select tasks that provide the most informative learning signals in each episode of pretraining. With the proposed method, we aim to achieve better efficiency in computation and memory usage for the pretraining process and resulting networks while maintaining the performance. In this preliminary work, we discuss the algorithm of the method and its two variants, downstream-aware and downstream-agnostic pretraining. Our experiment plan is also summarized, while empirical results will be shared in future work. |
1912.01795 | Fanchao Qi | Fanchao Qi, Liang Chang, Maosong Sun, Sicong Ouyang, Zhiyuan Liu | Towards Building a Multilingual Sememe Knowledge Base: Predicting
Sememes for BabelNet Synsets | Accepted by AAAI Conference on Artificial Intelligence 2020 for oral
presentation | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A sememe is defined as the minimum semantic unit of human languages. Sememe
knowledge bases (KBs), which contain words annotated with sememes, have been
successfully applied to many NLP tasks. However, existing sememe KBs are built
on only a few languages, which hinders their widespread utilization. To address
the issue, we propose to build a unified sememe KB for multiple languages based
on BabelNet, a multilingual encyclopedic dictionary. We first build a dataset
serving as the seed of the multilingual sememe KB. It provides manual sememe
annotations for over $15$ thousand synsets (the entries of BabelNet). Then, we
present a novel task of automatic sememe prediction for synsets, aiming to
expand the seed dataset into a usable KB. We also propose two simple and
effective models, which exploit different information of synsets. Finally, we
conduct quantitative and qualitative analyses to explore important factors and
difficulties in the task. All the source code and data of this work can be
obtained on https://github.com/thunlp/BabelNet-Sememe-Prediction.
| [
{
"created": "Wed, 4 Dec 2019 04:39:32 GMT",
"version": "v1"
}
] | 2019-12-05 | [
[
"Qi",
"Fanchao",
""
],
[
"Chang",
"Liang",
""
],
[
"Sun",
"Maosong",
""
],
[
"Ouyang",
"Sicong",
""
],
[
"Liu",
"Zhiyuan",
""
]
] | A sememe is defined as the minimum semantic unit of human languages. Sememe knowledge bases (KBs), which contain words annotated with sememes, have been successfully applied to many NLP tasks. However, existing sememe KBs are built on only a few languages, which hinders their widespread utilization. To address the issue, we propose to build a unified sememe KB for multiple languages based on BabelNet, a multilingual encyclopedic dictionary. We first build a dataset serving as the seed of the multilingual sememe KB. It provides manual sememe annotations for over $15$ thousand synsets (the entries of BabelNet). Then, we present a novel task of automatic sememe prediction for synsets, aiming to expand the seed dataset into a usable KB. We also propose two simple and effective models, which exploit different information of synsets. Finally, we conduct quantitative and qualitative analyses to explore important factors and difficulties in the task. All the source code and data of this work can be obtained on https://github.com/thunlp/BabelNet-Sememe-Prediction. |
2112.06539 | Kamil \.Zywanowski | Kamil \.Zywanowski, Adam Banaszczyk, Micha{\l} R. Nowicki, and Jacek
Komorowski | MinkLoc3D-SI: 3D LiDAR place recognition with sparse convolutions,
spherical coordinates, and intensity | null | null | 10.1109/LRA.2021.3136863 | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | 3D LiDAR place recognition aims to estimate a coarse localization in a
previously seen environment based on a single scan from a rotating 3D LiDAR
sensor. The existing solutions to this problem include hand-crafted point cloud
descriptors (e.g., ScanContext, M2DP, LiDAR IRIS) and deep learning-based
solutions (e.g., PointNetVLAD, PCAN, LPDNet, DAGC, MinkLoc3D), which are often
only evaluated on accumulated 2D scans from the Oxford RobotCar dataset. We
introduce MinkLoc3D-SI, a sparse convolution-based solution that utilizes
spherical coordinates of 3D points and processes the intensity of 3D LiDAR
measurements, improving the performance when a single 3D LiDAR scan is used.
Our method integrates the improvements typical for hand-crafted descriptors
(like ScanContext) with the most efficient 3D sparse convolutions (MinkLoc3D).
Our experiments show improved results on single scans from 3D LiDARs (USyd
Campus dataset) and great generalization ability (KITTI dataset). Using
intensity information on accumulated 2D scans (RobotCar Intensity dataset)
improves the performance, even though spherical representation doesn't produce
a noticeable improvement. As a result, MinkLoc3D-SI is suited for single scans
obtained from a 3D LiDAR, making it applicable in autonomous vehicles.
| [
{
"created": "Mon, 13 Dec 2021 10:21:34 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Dec 2021 10:38:06 GMT",
"version": "v2"
}
] | 2021-12-28 | [
[
"Żywanowski",
"Kamil",
""
],
[
"Banaszczyk",
"Adam",
""
],
[
"Nowicki",
"Michał R.",
""
],
[
"Komorowski",
"Jacek",
""
]
] | 3D LiDAR place recognition aims to estimate a coarse localization in a previously seen environment based on a single scan from a rotating 3D LiDAR sensor. The existing solutions to this problem include hand-crafted point cloud descriptors (e.g., ScanContext, M2DP, LiDAR IRIS) and deep learning-based solutions (e.g., PointNetVLAD, PCAN, LPDNet, DAGC, MinkLoc3D), which are often only evaluated on accumulated 2D scans from the Oxford RobotCar dataset. We introduce MinkLoc3D-SI, a sparse convolution-based solution that utilizes spherical coordinates of 3D points and processes the intensity of 3D LiDAR measurements, improving the performance when a single 3D LiDAR scan is used. Our method integrates the improvements typical for hand-crafted descriptors (like ScanContext) with the most efficient 3D sparse convolutions (MinkLoc3D). Our experiments show improved results on single scans from 3D LiDARs (USyd Campus dataset) and great generalization ability (KITTI dataset). Using intensity information on accumulated 2D scans (RobotCar Intensity dataset) improves the performance, even though spherical representation doesn't produce a noticeable improvement. As a result, MinkLoc3D-SI is suited for single scans obtained from a 3D LiDAR, making it applicable in autonomous vehicles. |
1502.07786 | John Clements | John Clements | Generating 56-bit passwords using Markov Models (and Charles Dickens) | 5 pages, 2 figures | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a password generation scheme based on Markov models built from
English text (specifically, Charles Dickens' *A Tale Of Two Cities*). We show a
(linear-running-time) bijection between random bitstrings of any desired length
and generated text, ensuring that all passwords are generated with equal
probability. We observe that the generated passwords appear to strike a
reasonable balance between memorability and security. Using the system, we get
56-bit passwords like 'The cusay is wither?" t', rather than passwords like
'tQ$%Xc4Ef'.
| [
{
"created": "Thu, 26 Feb 2015 23:14:49 GMT",
"version": "v1"
}
] | 2015-03-02 | [
[
"Clements",
"John",
""
]
] | We describe a password generation scheme based on Markov models built from English text (specifically, Charles Dickens' *A Tale Of Two Cities*). We show a (linear-running-time) bijection between random bitstrings of any desired length and generated text, ensuring that all passwords are generated with equal probability. We observe that the generated passwords appear to strike a reasonable balance between memorability and security. Using the system, we get 56-bit passwords like 'The cusay is wither?" t', rather than passwords like 'tQ$%Xc4Ef'. |
2311.12735 | Aunabil Chakma | Aunabil Chakma and Masum Hasan | LowResource at BLP-2023 Task 2: Leveraging BanglaBert for Low Resource
Sentiment Analysis of Bangla Language | Accepted at BLP Workshop @EMNLP2023 | null | null | 75 | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper describes the system of the LowResource Team for Task 2 of
BLP-2023, which involves conducting sentiment analysis on a dataset composed of
public posts and comments from diverse social media platforms. Our primary aim
is to utilize BanglaBert, a BERT model pre-trained on a large Bangla corpus,
using various strategies including fine-tuning, dropping random tokens, and
using several external datasets. Our final model is an ensemble of the three
best BanglaBert variations. Our system has achieved overall 3rd in the Test Set
among 30 participating teams with a score of 0.718. Additionally, we discuss
the promising systems that didn't perform well, namely task-adaptive pretraining
and paraphrasing using BanglaT5. Training codes and external datasets which are
used for our system are publicly available at
https://github.com/Aunabil4602/bnlp-workshop-task2-2023
| [
{
"created": "Tue, 21 Nov 2023 17:21:15 GMT",
"version": "v1"
}
] | 2023-11-22 | [
[
"Chakma",
"Aunabil",
""
],
[
"Hasan",
"Masum",
""
]
] | This paper describes the system of the LowResource Team for Task 2 of BLP-2023, which involves conducting sentiment analysis on a dataset composed of public posts and comments from diverse social media platforms. Our primary aim is to utilize BanglaBert, a BERT model pre-trained on a large Bangla corpus, using various strategies including fine-tuning, dropping random tokens, and using several external datasets. Our final model is an ensemble of the three best BanglaBert variations. Our system has achieved overall 3rd in the Test Set among 30 participating teams with a score of 0.718. Additionally, we discuss the promising systems that didn't perform well, namely task-adaptive pretraining and paraphrasing using BanglaT5. Training codes and external datasets which are used for our system are publicly available at https://github.com/Aunabil4602/bnlp-workshop-task2-2023 |
2404.03995 | Justus Bogner | Apoorva Nalini Pradeep Kumar, Justus Bogner, Markus Funke, Patricia
Lago | Balancing Progress and Responsibility: A Synthesis of Sustainability
Trade-Offs of AI-Based Systems | Accepted for publication at the 8th International Workshop on Green
and Sustainable Software (GREENS'24), collocated with ICSA'24 | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in artificial intelligence (AI) capabilities have increased
the eagerness of companies to integrate AI into software systems. While AI can
be used to have a positive impact on several dimensions of sustainability, this
is often overshadowed by its potential negative influence. While many studies
have explored sustainability factors in isolation, there is insufficient
holistic coverage of potential sustainability benefits or costs that
practitioners need to consider during decision-making for AI adoption. We
therefore aim to synthesize trade-offs related to sustainability in the context
of integrating AI into software systems. We want to make the sustainability
benefits and costs of integrating AI more transparent and accessible for
practitioners.
The study was conducted in collaboration with a Dutch financial organization.
We first performed a rapid review that led to the inclusion of 151 research
papers. Afterward, we conducted six semi-structured interviews to enrich the
data with industry perspectives. The combined results showcase the potential
sustainability benefits and costs of integrating AI. The labels synthesized
from the review regarding potential sustainability benefits were clustered into
16 themes, with "energy management" being the most frequently mentioned one. 11
themes were identified in the interviews, with the top mentioned theme being
"employee wellbeing". Regarding sustainability costs, the review discovered
seven themes, with "deployment issues" being the most popular one, followed by
"ethics & society". "Environmental issues" was the top theme from the
interviews. Our results provide valuable insights to organizations and
practitioners for understanding the potential sustainability implications of
adopting AI.
| [
{
"created": "Fri, 5 Apr 2024 10:11:08 GMT",
"version": "v1"
}
] | 2024-04-08 | [
[
"Kumar",
"Apoorva Nalini Pradeep",
""
],
[
"Bogner",
"Justus",
""
],
[
"Funke",
"Markus",
""
],
[
"Lago",
"Patricia",
""
]
] | Recent advances in artificial intelligence (AI) capabilities have increased the eagerness of companies to integrate AI into software systems. While AI can be used to have a positive impact on several dimensions of sustainability, this is often overshadowed by its potential negative influence. While many studies have explored sustainability factors in isolation, there is insufficient holistic coverage of potential sustainability benefits or costs that practitioners need to consider during decision-making for AI adoption. We therefore aim to synthesize trade-offs related to sustainability in the context of integrating AI into software systems. We want to make the sustainability benefits and costs of integrating AI more transparent and accessible for practitioners. The study was conducted in collaboration with a Dutch financial organization. We first performed a rapid review that led to the inclusion of 151 research papers. Afterward, we conducted six semi-structured interviews to enrich the data with industry perspectives. The combined results showcase the potential sustainability benefits and costs of integrating AI. The labels synthesized from the review regarding potential sustainability benefits were clustered into 16 themes, with "energy management" being the most frequently mentioned one. 11 themes were identified in the interviews, with the top mentioned theme being "employee wellbeing". Regarding sustainability costs, the review discovered seven themes, with "deployment issues" being the most popular one, followed by "ethics & society". "Environmental issues" was the top theme from the interviews. Our results provide valuable insights to organizations and practitioners for understanding the potential sustainability implications of adopting AI. |
2212.10397 | Lining Zhang | Lining Zhang, Simon Mille, Yufang Hou, Daniel Deutsch, Elizabeth
Clark, Yixin Liu, Saad Mahamood, Sebastian Gehrmann, Miruna Clinciu, Khyathi
Chandu, Jo\~ao Sedoc | Needle in a Haystack: An Analysis of High-Agreement Workers on MTurk for
Summarization | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | To prevent the costly and inefficient use of resources on low-quality
annotations, we want a method for creating a pool of dependable annotators who
can effectively complete difficult tasks, such as evaluating automatic
summarization. Thus, we investigate the recruitment of high-quality Amazon
Mechanical Turk workers via a two-step pipeline. We show that we can
successfully filter out subpar workers before they carry out the evaluations
and obtain high-agreement annotations with similar constraints on resources.
Although our workers demonstrate a strong consensus among themselves and
CloudResearch workers, their alignment with expert judgments on a subset of the
data is not as expected and needs further training in correctness. This paper
still serves as a best practice for the recruitment of qualified annotators in
other challenging annotation tasks.
| [
{
"created": "Tue, 20 Dec 2022 16:25:42 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Dec 2022 22:16:45 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Jun 2023 01:45:33 GMT",
"version": "v3"
}
] | 2023-06-16 | [
[
"Zhang",
"Lining",
""
],
[
"Mille",
"Simon",
""
],
[
"Hou",
"Yufang",
""
],
[
"Deutsch",
"Daniel",
""
],
[
"Clark",
"Elizabeth",
""
],
[
"Liu",
"Yixin",
""
],
[
"Mahamood",
"Saad",
""
],
[
"Gehrmann",
"Sebastian",
""
],
[
"Clinciu",
"Miruna",
""
],
[
"Chandu",
"Khyathi",
""
],
[
"Sedoc",
"João",
""
]
] | To prevent the costly and inefficient use of resources on low-quality annotations, we want a method for creating a pool of dependable annotators who can effectively complete difficult tasks, such as evaluating automatic summarization. Thus, we investigate the recruitment of high-quality Amazon Mechanical Turk workers via a two-step pipeline. We show that we can successfully filter out subpar workers before they carry out the evaluations and obtain high-agreement annotations with similar constraints on resources. Although our workers demonstrate a strong consensus among themselves and CloudResearch workers, their alignment with expert judgments on a subset of the data is not as expected and needs further training in correctness. This paper still serves as a best practice for the recruitment of qualified annotators in other challenging annotation tasks. |
1809.00742 | Vincent Vajnovszki | Phan Thuan Do, Thi Thu Huong Tran, Vincent Vajnovszki | Exhaustive generation for permutations avoiding a (colored) regular sets
of patterns | null | null | null | null | cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the fact that the field of pattern avoiding permutations has been
skyrocketing over the last two decades, there are very few exhaustive
generating algorithms for such classes of permutations. In this paper we
introduce the notions of regular and colored regular set of forbidden patterns,
which are particular cases of right-justified sets of forbidden patterns. We
show the (colored) regularity of several sets of forbidden patterns (some of
them involving variable length patterns) and we derive a general framework for
the efficient generation of permutations avoiding them. The obtained generating
algorithms are based on succession functions, a notion which is a byproduct of
the ECO method introduced in the context of enumeration and random generation
of combinatorial objects by Barcucci et al. in 1999, and developed later by
Bacchelli et al. in 2004, for instance. For some classes of permutations
falling under our general framework, the corresponding counting sequences are
classical in combinatorics, such as Pell, Fibonacci, Catalan, Schr\"oder and
binomial transform of Padovan sequence.
| [
{
"created": "Mon, 3 Sep 2018 23:18:15 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Sep 2018 08:32:49 GMT",
"version": "v2"
}
] | 2018-09-18 | [
[
"Do",
"Phan Thuan",
""
],
[
"Tran",
"Thi Thu Huong",
""
],
[
"Vajnovszki",
"Vincent",
""
]
] | Despite the fact that the field of pattern avoiding permutations has been skyrocketing over the last two decades, there are very few exhaustive generating algorithms for such classes of permutations. In this paper we introduce the notions of regular and colored regular set of forbidden patterns, which are particular cases of right-justified sets of forbidden patterns. We show the (colored) regularity of several sets of forbidden patterns (some of them involving variable length patterns) and we derive a general framework for the efficient generation of permutations avoiding them. The obtained generating algorithms are based on succession functions, a notion which is a byproduct of the ECO method introduced in the context of enumeration and random generation of combinatorial objects by Barcucci et al. in 1999, and developed later by Bacchelli et al. in 2004, for instance. For some classes of permutations falling under our general framework, the corresponding counting sequences are classical in combinatorics, such as Pell, Fibonacci, Catalan, Schr\"oder and binomial transform of Padovan sequence. |
1312.1819 | Yannis Moysoglou | Stavros G. Kolliopoulos and Yannis Moysoglou | Exponential lower bounds on the size of approximate formulations in the
natural encoding for Capacitated Facility Location | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The metric capacitated facility location is a well-studied problem for which,
while constant factor approximations are known, no efficient relaxation with
constant integrality gap is known. The question whether there is such a
relaxation is among the most important open problems of approximation
algorithms \cite{ShmoysWbook}.
In this paper we show that, if one is restricted to linear programs that use
the natural encoding for facility location, at least an exponential number of
constraints is needed to achieve a constant gap. Our proof does not assume any
special property of the relaxation such as locality or symmetry.
| [
{
"created": "Fri, 6 Dec 2013 10:06:20 GMT",
"version": "v1"
}
] | 2013-12-09 | [
[
"Kolliopoulos",
"Stavros G.",
""
],
[
"Moysoglou",
"Yannis",
""
]
] | The metric capacitated facility location is a well-studied problem for which, while constant factor approximations are known, no efficient relaxation with constant integrality gap is known. The question whether there is such a relaxation is among the most important open problems of approximation algorithms \cite{ShmoysWbook}. In this paper we show that, if one is restricted to linear programs that use the natural encoding for facility location, at least an exponential number of constraints is needed to achieve a constant gap. Our proof does not assume any special property of the relaxation such as locality or symmetry. |
2009.10365 | Diego Alvarez-Estevez | Diego Alvarez-Estevez and Roselyne M. Rijsman | Inter-database validation of a deep learning approach for automatic
sleep scoring | Original submission manuscript, 19 pages, 1 figure, 6 tables | PLoS ONE (2021) 16(8): e0256111 | 10.1371/journal.pone.0256111 | null | cs.LG cs.PF eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we describe a new deep learning approach for automatic sleep
staging, and carry out its validation by addressing its generalization
capabilities on a wide range of sleep staging databases. Prediction
capabilities are evaluated in the context of independent local and external
generalization scenarios. Effectively, by comparing both procedures it is
possible to better extrapolate the expected performance of the method on the
general reference task of sleep staging, regardless of data from a specific
database. In addition, we examine the suitability of a novel approach based on
the use of an ensemble of individual local models and evaluate its impact on
the resulting inter-database generalization performance. Validation results
show good general performance, as compared to the expected levels of human
expert agreement, as well as state-of-the-art automatic sleep staging
approaches
| [
{
"created": "Tue, 22 Sep 2020 07:46:43 GMT",
"version": "v1"
}
] | 2021-08-18 | [
[
"Alvarez-Estevez",
"Diego",
""
],
[
"Rijsman",
"Roselyne M.",
""
]
] | In this work we describe a new deep learning approach for automatic sleep staging, and carry out its validation by addressing its generalization capabilities on a wide range of sleep staging databases. Prediction capabilities are evaluated in the context of independent local and external generalization scenarios. Effectively, by comparing both procedures it is possible to better extrapolate the expected performance of the method on the general reference task of sleep staging, regardless of data from a specific database. In addition, we examine the suitability of a novel approach based on the use of an ensemble of individual local models and evaluate its impact on the resulting inter-database generalization performance. Validation results show good general performance, as compared to the expected levels of human expert agreement, as well as state-of-the-art automatic sleep staging approaches |
1201.2334 | Jiantao Jiao | Jiantao Jiao, Haim H. Permuter, Lei Zhao, Young-Han Kim and Tsachy
Weissman | Universal Estimation of Directed Information | 23 pages, 10 figures, to appear in IEEE Transactions on Information
Theory | null | 10.1109/TIT.2013.2267934 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Four estimators of the directed information rate between a pair of jointly
stationary ergodic finite-alphabet processes are proposed, based on universal
probability assignments. The first one is a Shannon--McMillan--Breiman type
estimator, similar to those used by Verd\'u (2005) and Cai, Kulkarni, and
Verd\'u (2006) for estimation of other information measures. We show the almost
sure and $L_1$ convergence properties of the estimator for any underlying
universal probability assignment. The other three estimators map universal
probability assignments to different functionals, each exhibiting relative
merits such as smoothness, nonnegativity, and boundedness. We establish the
consistency of these estimators in almost sure and $L_1$ senses, and derive
near-optimal rates of convergence in the minimax sense under mild conditions.
These estimators carry over directly to estimating other information measures
of stationary ergodic finite-alphabet processes, such as entropy rate and
mutual information rate, with near-optimal performance and provide alternatives
to classical approaches in the existing literature. Guided by these theoretical
results, the proposed estimators are implemented using the context-tree
weighting algorithm as the universal probability assignment. Experiments on
synthetic and real data are presented, demonstrating the potential of the
proposed schemes in practice and the utility of directed information estimation
in detecting and measuring causal influence and delay.
| [
{
"created": "Wed, 11 Jan 2012 15:49:51 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Oct 2012 04:32:14 GMT",
"version": "v2"
},
{
"created": "Fri, 17 May 2013 04:41:19 GMT",
"version": "v3"
},
{
"created": "Thu, 30 May 2013 22:25:14 GMT",
"version": "v4"
}
] | 2016-11-15 | [
[
"Jiao",
"Jiantao",
""
],
[
"Permuter",
"Haim H.",
""
],
[
"Zhao",
"Lei",
""
],
[
"Kim",
"Young-Han",
""
],
[
"Weissman",
"Tsachy",
""
]
] | Four estimators of the directed information rate between a pair of jointly stationary ergodic finite-alphabet processes are proposed, based on universal probability assignments. The first one is a Shannon--McMillan--Breiman type estimator, similar to those used by Verd\'u (2005) and Cai, Kulkarni, and Verd\'u (2006) for estimation of other information measures. We show the almost sure and $L_1$ convergence properties of the estimator for any underlying universal probability assignment. The other three estimators map universal probability assignments to different functionals, each exhibiting relative merits such as smoothness, nonnegativity, and boundedness. We establish the consistency of these estimators in almost sure and $L_1$ senses, and derive near-optimal rates of convergence in the minimax sense under mild conditions. These estimators carry over directly to estimating other information measures of stationary ergodic finite-alphabet processes, such as entropy rate and mutual information rate, with near-optimal performance and provide alternatives to classical approaches in the existing literature. Guided by these theoretical results, the proposed estimators are implemented using the context-tree weighting algorithm as the universal probability assignment. Experiments on synthetic and real data are presented, demonstrating the potential of the proposed schemes in practice and the utility of directed information estimation in detecting and measuring causal influence and delay. |
2006.05624 | Utkarsh Nath | Utkarsh Nath, Shrinu Kushagra, Yingzhen Yang | Adjoined Networks: A Training Paradigm with Applications to Network
Compression | Published at AAAI 2022 Spring Symposium on Machine Learning and
Knowledge Engineering for Hybrid Intelligence Code available at:
https://github.com/utkarshnath/Adjoint-Network.git | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compressing deep neural networks while maintaining accuracy is important when
we want to deploy large, powerful models in production and/or edge devices. One
common technique used to achieve this goal is knowledge distillation.
Typically, the output of a static pre-defined teacher (a large base network) is
used as soft labels to train and transfer information to a student (or smaller)
network. In this paper, we introduce Adjoined Networks, or AN, a learning
paradigm that trains both the original base network and the smaller compressed
network together. In our training approach, the parameters of the smaller
network are shared across both the base and the compressed networks. Using our
training paradigm, we can simultaneously compress (the student network) and
regularize (the teacher network) any architecture. In this paper, we focus on
popular CNN-based architectures used for computer vision tasks. We conduct an
extensive experimental evaluation of our training paradigm on various
large-scale datasets. Using ResNet-50 as the base network, AN achieves 71.8%
top-1 accuracy with only 1.8M parameters and 1.6 GFLOPs on the ImageNet
data-set. We further propose Differentiable Adjoined Networks (DAN), a training
paradigm that augments AN by using neural architecture search to jointly learn
both the width and the weights for each layer of the smaller network. DAN
achieves ResNet-50 level accuracy on ImageNet with $3.8\times$ fewer parameters
and $2.2\times$ fewer FLOPs.
| [
{
"created": "Wed, 10 Jun 2020 02:48:16 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Oct 2020 02:04:58 GMT",
"version": "v2"
},
{
"created": "Sat, 10 Oct 2020 00:23:48 GMT",
"version": "v3"
},
{
"created": "Wed, 6 Oct 2021 04:35:23 GMT",
"version": "v4"
},
{
"created": "Fri, 15 Apr 2022 00:15:28 GMT",
"version": "v5"
}
] | 2022-04-18 | [
[
"Nath",
"Utkarsh",
""
],
[
"Kushagra",
"Shrinu",
""
],
[
"Yang",
"Yingzhen",
""
]
] | Compressing deep neural networks while maintaining accuracy is important when we want to deploy large, powerful models in production and/or edge devices. One common technique used to achieve this goal is knowledge distillation. Typically, the output of a static pre-defined teacher (a large base network) is used as soft labels to train and transfer information to a student (or smaller) network. In this paper, we introduce Adjoined Networks, or AN, a learning paradigm that trains both the original base network and the smaller compressed network together. In our training approach, the parameters of the smaller network are shared across both the base and the compressed networks. Using our training paradigm, we can simultaneously compress (the student network) and regularize (the teacher network) any architecture. In this paper, we focus on popular CNN-based architectures used for computer vision tasks. We conduct an extensive experimental evaluation of our training paradigm on various large-scale datasets. Using ResNet-50 as the base network, AN achieves 71.8% top-1 accuracy with only 1.8M parameters and 1.6 GFLOPs on the ImageNet data-set. We further propose Differentiable Adjoined Networks (DAN), a training paradigm that augments AN by using neural architecture search to jointly learn both the width and the weights for each layer of the smaller network. DAN achieves ResNet-50 level accuracy on ImageNet with $3.8\times$ fewer parameters and $2.2\times$ fewer FLOPs. |