Column types: id: string (9-10 chars); submitter: string (1-64, nullable); authors: string (4-20.7k); title: string (4-246); comments: string (1-523, nullable); journal-ref: string (4-404, nullable); doi: string (11-153, nullable); report-no: string (2-254, nullable); categories: string (5-98); license: string (9 classes); orig_abstract: string (14-3.35k); versions: list (1-60 items); update_date: string (10 chars); authors_parsed: list (1-1.35k items); abstract: string (11-3.34k).

| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2202.06393
|
Christina Pan
|
Christina A. Pan, Sahil Yakhmi, Tara P. Iyer, Evan Strasnick, Amy X.
Zhang, and Michael S. Bernstein
|
Comparing the Perceived Legitimacy of Content Moderation Processes:
Contractors, Algorithms, Expert Panels, and Digital Juries
|
This paper will appear at CSCW 2022
| null |
10.1145/3512929
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
While research continues to investigate and improve the accuracy, fairness,
and normative appropriateness of content moderation processes on large social
media platforms, even the best process cannot be effective if users reject its
authority as illegitimate. We present a survey experiment comparing the
perceived institutional legitimacy of four popular content moderation
processes. We conducted a within-subjects experiment in which we showed US
Facebook users moderation decisions and randomized the description of whether
those decisions were made by paid contractors, algorithms, expert panels, or
juries of users. Prior work suggests that juries will have the highest
perceived legitimacy due to the benefits of judicial independence and
democratic representation. However, expert panels had greater perceived
legitimacy than algorithms or juries. Moreover, outcome alignment - agreement
with the decision - played a larger role than process in determining perceived
legitimacy. These results suggest benefits to incorporating expert oversight in
content moderation and underscore that any process will face legitimacy
challenges derived from disagreement about outcomes.
|
[
{
"created": "Sun, 13 Feb 2022 19:32:49 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Oct 2022 05:09:21 GMT",
"version": "v2"
}
] |
2022-10-07
|
[
[
"Pan",
"Christina A.",
""
],
[
"Yakhmi",
"Sahil",
""
],
[
"Iyer",
"Tara P.",
""
],
[
"Strasnick",
"Evan",
""
],
[
"Zhang",
"Amy X.",
""
],
[
"Bernstein",
"Michael S.",
""
]
] |
While research continues to investigate and improve the accuracy, fairness, and normative appropriateness of content moderation processes on large social media platforms, even the best process cannot be effective if users reject its authority as illegitimate. We present a survey experiment comparing the perceived institutional legitimacy of four popular content moderation processes. We conducted a within-subjects experiment in which we showed US Facebook users moderation decisions and randomized the description of whether those decisions were made by paid contractors, algorithms, expert panels, or juries of users. Prior work suggests that juries will have the highest perceived legitimacy due to the benefits of judicial independence and democratic representation. However, expert panels had greater perceived legitimacy than algorithms or juries. Moreover, outcome alignment - agreement with the decision - played a larger role than process in determining perceived legitimacy. These results suggest benefits to incorporating expert oversight in content moderation and underscore that any process will face legitimacy challenges derived from disagreement about outcomes.
|
1502.05256
|
Peter Gloor
|
Peter Gloor, Patrick De Boer, Wei Lo, Stefan Wagner, Keiichi Nemoto,
and Hauke Fuehres
|
Cultural Anthropology Through the Lens of Wikipedia - A Comparison of
Historical Leadership Networks in the English, Chinese, Japanese and German
Wikipedia
|
Proceedings of the 5th International Conference on Collaborative
Innovation Networks COINs15, Tokyo, Japan March 12-14, 2015
(arXiv:1502.01142)
| null | null |
coins15/2015/04
|
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  In this paper, we study the differences in historical worldview between
Western and Eastern cultures, represented through the English, Chinese,
Japanese, and German Wikipedia. In particular, we analyze the historical
networks of the world's leaders since the beginning of written history,
comparing them across the four Wikipedias.
|
[
{
"created": "Wed, 18 Feb 2015 14:45:18 GMT",
"version": "v1"
}
] |
2015-02-19
|
[
[
"Gloor",
"Peter",
""
],
[
"De Boer",
"Patrick",
""
],
[
"Lo",
"Wei",
""
],
[
"Wagner",
"Stefan",
""
],
[
"Nemoto",
"Keiichi",
""
],
[
"Fuehres",
"Hauke",
""
]
] |
In this paper, we study the differences in historical worldview between Western and Eastern cultures, represented through the English, Chinese, Japanese, and German Wikipedia. In particular, we analyze the historical networks of the world's leaders since the beginning of written history, comparing them across the four Wikipedias.
|
2308.02213
|
Tianhao Qi
|
Tianhao Qi, Hongtao Xie, Pandeng Li, Jiannan Ge, Yongdong Zhang
|
Balanced Classification: A Unified Framework for Long-Tailed Object
Detection
|
Accepted by IEEE Transactions on Multimedia, to be published; Code:
https://github.com/Tianhao-Qi/BACL
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional detectors suffer from performance degradation when dealing with
long-tailed data due to a classification bias towards the majority head
categories. In this paper, we contend that the learning bias originates from
two factors: 1) the unequal competition arising from the imbalanced
distribution of foreground categories, and 2) the lack of sample diversity in
tail categories. To tackle these issues, we introduce a unified framework
called BAlanced CLassification (BACL), which enables adaptive rectification of
inequalities caused by disparities in category distribution and dynamic
intensification of sample diversities in a synchronized manner. Specifically, a
novel foreground classification balance loss (FCBL) is developed to ameliorate
the domination of head categories and shift attention to
difficult-to-differentiate categories by introducing pairwise class-aware
margins and auto-adjusted weight terms, respectively. This loss prevents the
over-suppression of tail categories in the context of unequal competition.
Moreover, we propose a dynamic feature hallucination module (FHM), which
enhances the representation of tail categories in the feature space by
synthesizing hallucinated samples to introduce additional data variances. In
this divide-and-conquer approach, BACL sets a new state-of-the-art on the
challenging LVIS benchmark with a decoupled training pipeline, surpassing
vanilla Faster R-CNN with ResNet-50-FPN by 5.8% AP and 16.1% AP for overall and
tail categories. Extensive experiments demonstrate that BACL consistently
achieves performance improvements across various datasets with different
backbones and architectures. Code and models are available at
https://github.com/Tianhao-Qi/BACL.
|
[
{
"created": "Fri, 4 Aug 2023 09:11:07 GMT",
"version": "v1"
}
] |
2023-08-07
|
[
[
"Qi",
"Tianhao",
""
],
[
"Xie",
"Hongtao",
""
],
[
"Li",
"Pandeng",
""
],
[
"Ge",
"Jiannan",
""
],
[
"Zhang",
"Yongdong",
""
]
] |
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories. In this paper, we contend that the learning bias originates from two factors: 1) the unequal competition arising from the imbalanced distribution of foreground categories, and 2) the lack of sample diversity in tail categories. To tackle these issues, we introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution and dynamic intensification of sample diversities in a synchronized manner. Specifically, a novel foreground classification balance loss (FCBL) is developed to ameliorate the domination of head categories and shift attention to difficult-to-differentiate categories by introducing pairwise class-aware margins and auto-adjusted weight terms, respectively. This loss prevents the over-suppression of tail categories in the context of unequal competition. Moreover, we propose a dynamic feature hallucination module (FHM), which enhances the representation of tail categories in the feature space by synthesizing hallucinated samples to introduce additional data variances. In this divide-and-conquer approach, BACL sets a new state-of-the-art on the challenging LVIS benchmark with a decoupled training pipeline, surpassing vanilla Faster R-CNN with ResNet-50-FPN by 5.8% AP and 16.1% AP for overall and tail categories. Extensive experiments demonstrate that BACL consistently achieves performance improvements across various datasets with different backbones and architectures. Code and models are available at https://github.com/Tianhao-Qi/BACL.
|
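The pairwise class-aware margins and auto-adjusted weight terms that the BACL abstract describes can be illustrated with a toy loss. This is our reading of the idea, not the paper's FCBL implementation: the margin form, `alpha`, and the weight normalization are all illustrative choices.

```python
import numpy as np

def fcbl_like_loss(logits, label, class_freq, alpha=0.5):
    """Margin- and weight-adjusted cross-entropy in the spirit of BACL's
    FCBL. The margin form and weight choice are illustrative guesses,
    not the paper's exact formulation."""
    freq = np.asarray(class_freq, dtype=float)
    # Pairwise class-aware margins: lift the logits of classes rarer than
    # the ground-truth class so head categories cannot suppress them.
    margins = np.maximum(np.log(freq[label]) - np.log(freq), 0.0)
    adj = logits + alpha * margins
    adj = adj - adj.max()                       # numerical stability
    p = np.exp(adj) / np.exp(adj).sum()
    # Auto-adjusted weight term: rarer ground-truth classes weigh more.
    weight = (freq.sum() / (freq[label] * len(freq))) ** 0.5
    return -weight * np.log(p[label])
```

With a uniform class distribution the margins vanish and the weight is 1, so the sketch reduces to plain cross-entropy, which is the sanity check one would want from any such rebalancing.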
1810.02684
|
Vahid Moosavi
|
Joao P. Leitao, Mohamed Zaghloul and Vahid Moosavi
|
Modeling overland flow from local inflows in almost no-time, using Self
Organizing Maps
| null | null | null | null |
cs.CY cs.CC cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physically-based overland flow models are computationally demanding,
hindering their use for real-time applications. Therefore, the development of
fast (and reasonably accurate) overland flow models is needed if they are to be
used to support flood mitigation decision making. In this study, we investigate
the potential of Self-Organizing Maps to rapidly generate water depth and flood
extent results. To conduct the study, we developed a flood-simulation-specific
SOM, using cellular-automata flood model results and a synthetic DEM and inflow
hydrograph. The preliminary results showed that water depth and flood extent
results produced by the SOM are reasonably accurate and obtained in a very
short period of time. Based on this, it seems that SOMs have the potential to
provide critical flood information to support real-time flood mitigation
decisions. The findings presented would, however, require further
investigation before general conclusions can be drawn; such investigations may
consider real terrain representations, real water supply networks, and
realistic inflows from pipe bursts.
|
[
{
"created": "Sun, 23 Sep 2018 18:54:29 GMT",
"version": "v1"
}
] |
2018-10-08
|
[
[
"Leitao",
"Joao P.",
""
],
[
"Zaghloul",
"Mohamed",
""
],
[
"Moosavi",
"Vahid",
""
]
] |
Physically-based overland flow models are computationally demanding, hindering their use for real-time applications. Therefore, the development of fast (and reasonably accurate) overland flow models is needed if they are to be used to support flood mitigation decision making. In this study, we investigate the potential of Self-Organizing Maps to rapidly generate water depth and flood extent results. To conduct the study, we developed a flood-simulation-specific SOM, using cellular-automata flood model results and a synthetic DEM and inflow hydrograph. The preliminary results showed that water depth and flood extent results produced by the SOM are reasonably accurate and obtained in a very short period of time. Based on this, it seems that SOMs have the potential to provide critical flood information to support real-time flood mitigation decisions. The findings presented would, however, require further investigation before general conclusions can be drawn; such investigations may consider real terrain representations, real water supply networks, and realistic inflows from pipe bursts.
|
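For readers unfamiliar with Self-Organizing Maps, a minimal SOM of the kind the flood study trains can be sketched as follows. The grid size, schedules, and toy data are our assumptions, not the study's setup; in the paper the prototypes would be water-depth/flood-extent maps keyed by inflow features.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=1000, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map trainer (illustrative hyperparameters)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n, d = data.shape
    weights = rng.standard_normal((h * w, d))
    # 2-D grid coordinates of each unit, used by the neighborhood kernel.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(iters):
        x = data[rng.integers(n)]
        frac = t / iters
        lr = lr0 * (1.0 - frac)                  # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-2     # shrinking neighborhood
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        grid_d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        nb = np.exp(-grid_d2 / (2.0 * sigma ** 2))         # Gaussian kernel
        weights += lr * nb[:, None] * (x - weights)
    return weights

def som_predict(weights, x):
    """Return the prototype closest to x, e.g. the stored flood map for
    the inflow scenario most similar to the query."""
    return weights[np.argmin(((weights - x) ** 2).sum(axis=1))]
```

Prediction is a single nearest-prototype lookup, which is what makes the approach fast enough for the real-time use the abstract targets.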
2110.00199
|
Ching-Hsun Tseng
|
Ching-Hsun. Tseng, Liu-Hsueh. Cheng, Shin-Jye. Lee, Xiaojun Zeng
|
Perturbated Gradients Updating within Unit Space for Deep Learning
| null | null |
10.1109/IJCNN55064.2022.9892245
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
  In deep learning, optimization plays a vital role. Focusing on image
classification, this work investigates the pros and cons of widely used
optimizers and proposes a new optimizer: the Perturbated Unit Gradient Descent
(PUGD) algorithm, which extends the normalized gradient operation with a
tensor perturbation to perform updates in the unit space. Via a set of
experiments and analyses, we show that PUGD performs locally bounded updates,
meaning that each update is controlled. Moreover, PUGD can push models to a
flat minimum, where the error remains approximately constant, not only because
it avoids stationary points through gradient normalization but also by
scanning sharpness in the unit ball. In a series of rigorous experiments, PUGD
helps models reach state-of-the-art Top-1 accuracy on Tiny ImageNet and
competitive performance on CIFAR-{10, 100}. We open-source our code at:
https://github.com/hanktseng131415go/PUGD.
|
[
{
"created": "Fri, 1 Oct 2021 04:00:51 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jan 2022 18:25:25 GMT",
"version": "v2"
}
] |
2022-11-10
|
[
[
"Tseng",
"Ching-Hsun.",
""
],
[
"Cheng",
"Liu-Hsueh.",
""
],
[
"Lee",
"Shin-Jye.",
""
],
[
"Zeng",
"Xiaojun",
""
]
] |
In deep learning, optimization plays a vital role. Focusing on image classification, this work investigates the pros and cons of widely used optimizers and proposes a new optimizer: the Perturbated Unit Gradient Descent (PUGD) algorithm, which extends the normalized gradient operation with a tensor perturbation to perform updates in the unit space. Via a set of experiments and analyses, we show that PUGD performs locally bounded updates, meaning that each update is controlled. Moreover, PUGD can push models to a flat minimum, where the error remains approximately constant, not only because it avoids stationary points through gradient normalization but also by scanning sharpness in the unit ball. In a series of rigorous experiments, PUGD helps models reach state-of-the-art Top-1 accuracy on Tiny ImageNet and competitive performance on CIFAR-{10, 100}. We open-source our code at: https://github.com/hanktseng131415go/PUGD.
|
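A single update step matching our reading of the PUGD abstract (take the gradient again at a point perturbed along the normalized gradient, then normalize the combined direction so every update has unit length) might look like this. It is a sketch of the idea, not the authors' released code; `lr` and `rho` are illustrative.

```python
import numpy as np

def pugd_step(w, grad_fn, lr=0.1, rho=0.05):
    """One PUGD-style update as we read the abstract: the normalized
    update direction lives on the unit sphere, so each step is locally
    bounded, and the perturbed gradient probes local sharpness."""
    g = grad_fn(w)
    g_unit = g / (np.linalg.norm(g) + 1e-12)
    g_pert = grad_fn(w + rho * g_unit)          # gradient after perturbation
    u = g + g_pert
    u_unit = u / (np.linalg.norm(u) + 1e-12)    # normalize the combined update
    return w - lr * u_unit
```

Because the update is normalized, the step length is exactly `lr` regardless of gradient magnitude, which is one concrete sense of "locally bounded updating".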
1303.5705
|
Jaume Agustí-Cullell
|
Jaume Agustí-Cullell, Francesc Esteva, Pere Garcia, Lluis Godo,
Carles Sierra
|
Combining Multiple-Valued Logics in Modular Expert Systems
|
Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991)
| null | null |
UAI-P-1991-PG-17-25
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  The way experts manage uncertainty usually changes depending on the task they
are performing. This fact has led us to consider the problem of communicating
modules (task implementations) in a large and structured knowledge-based system
when modules have different uncertainty calculi. In this paper, the analysis of
the communication problem is made assuming that (i) each uncertainty calculus
is an inference mechanism defining an entailment relation, and therefore the
communication is considered to be inference-preserving, and (ii) we restrict
ourselves to the case in which the different uncertainty calculi are given by a
class of truth-functional multiple-valued logics.
|
[
{
"created": "Wed, 20 Mar 2013 15:29:35 GMT",
"version": "v1"
}
] |
2013-03-26
|
[
[
"Agustí-Cullell",
"Jaume",
""
],
[
"Esteva",
"Francesc",
""
],
[
"Garcia",
"Pere",
""
],
[
"Godo",
"Lluis",
""
],
[
"Sierra",
"Carles",
""
]
] |
The way experts manage uncertainty usually changes depending on the task they are performing. This fact has led us to consider the problem of communicating modules (task implementations) in a large and structured knowledge-based system when modules have different uncertainty calculi. In this paper, the analysis of the communication problem is made assuming that (i) each uncertainty calculus is an inference mechanism defining an entailment relation, and therefore the communication is considered to be inference-preserving, and (ii) we restrict ourselves to the case in which the different uncertainty calculi are given by a class of truth-functional multiple-valued logics.
|
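As a concrete instance of the truth-functional multiple-valued logics the abstract refers to, infinite-valued Łukasiewicz logic defines its connectives pointwise on truth values in [0, 1]. The paper works with a general class of such logics; this is just one familiar member, included for illustration.

```python
def luk_neg(a: float) -> float:
    return 1.0 - a                       # involutive negation

def luk_and(a: float, b: float) -> float:
    return max(0.0, a + b - 1.0)         # strong conjunction

def luk_or(a: float, b: float) -> float:
    return min(1.0, a + b)               # strong disjunction

def luk_implies(a: float, b: float) -> float:
    return min(1.0, 1.0 - a + b)         # residuated implication
```

Truth-functionality is the key property here: the truth value of a compound formula depends only on the truth values of its parts, which is what makes entailment between modules amenable to the paper's analysis.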
1711.05938
|
Yang Zhang
|
Zehui Xiong, Yang Zhang, Dusit Niyato, Ping Wang and Zhu Han
|
When Mobile Blockchain Meets Edge Computing
|
Accepted by IEEE Communications Magazine
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain, as the backbone technology of the current popular Bitcoin digital
currency, has become a promising decentralized data management framework.
Although blockchain has been widely adopted in many applications, e.g.,
finance, healthcare, and logistics, its application in mobile services is still
limited. This is because blockchain users must solve preset
proof-of-work puzzles to add new data, i.e., a block, to the blockchain.
Solving the proof-of-work, however, consumes substantial resources in terms of
CPU time and energy, which is not suitable for resource-limited mobile devices.
To facilitate blockchain applications in future mobile Internet of Things
systems, multiple access mobile edge computing appears to be an auspicious
solution to solve the proof-of-work puzzles for mobile users. We first
introduce a novel concept of edge computing for mobile blockchain. Then, we
introduce an economic approach for edge computing resource management.
Moreover, a prototype of mobile edge computing enabled blockchain systems is
presented with experimental results to justify the proposed concept.
|
[
{
"created": "Thu, 16 Nov 2017 05:53:57 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Apr 2018 23:14:28 GMT",
"version": "v2"
}
] |
2018-04-13
|
[
[
"Xiong",
"Zehui",
""
],
[
"Zhang",
"Yang",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Wang",
"Ping",
""
],
[
"Han",
"Zhu",
""
]
] |
Blockchain, as the backbone technology of the current popular Bitcoin digital currency, has become a promising decentralized data management framework. Although blockchain has been widely adopted in many applications, e.g., finance, healthcare, and logistics, its application in mobile services is still limited. This is because blockchain users must solve preset proof-of-work puzzles to add new data, i.e., a block, to the blockchain. Solving the proof-of-work, however, consumes substantial resources in terms of CPU time and energy, which is not suitable for resource-limited mobile devices. To facilitate blockchain applications in future mobile Internet of Things systems, multiple access mobile edge computing appears to be an auspicious solution to solve the proof-of-work puzzles for mobile users. We first introduce a novel concept of edge computing for mobile blockchain. Then, we introduce an economic approach for edge computing resource management. Moreover, a prototype of mobile edge computing enabled blockchain systems is presented with experimental results to justify the proposed concept.
|
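The proof-of-work burden that motivates offloading to edge servers shows up clearly in a toy puzzle: solving requires brute-force hashing, while verifying the returned nonce costs a single hash, so a resource-limited mobile device can safely delegate the search. The leading-zero difficulty encoding below is a simplification of real blockchain targets.

```python
import hashlib

def solve_pow(block_data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest of
    data+nonce starts with `difficulty` zero hex digits. Real chains use
    far harder targets, which is what exhausts a mobile CPU and battery."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify_pow(block_data: str, nonce: int, difficulty: int) -> bool:
    """Verification costs one hash, so the device can cheaply check the
    nonce an edge server returns."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

This solve/verify asymmetry is exactly the economic lever of the article: the expensive search is priced and sold by the edge provider, while the cheap check stays on the device.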
1806.09566
|
Arnaud Dethise
|
Arnaud Dethise, Marco Chiesa, Marco Canini
|
Prelude: Ensuring Inter-Domain Loop-Freedom in~SDN-Enabled Networks
| null | null |
10.1145/3232565.3232570
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Software-Defined-eXchanges (SDXes) promise to tackle the timely quest of
improving the inter-domain routing ecosystem through SDN deployment.
Yet, the naive deployment of SDN on the Internet raises concerns about the
correctness of the inter-domain data-plane. By allowing operators to deflect
traffic from the default BGP route, SDN policies are liable to create
permanent forwarding loops invisible to the control-plane.
In this paper, we propose a system, called Prelude, for detecting SDN-induced
forwarding loops between SDXes with high accuracy without leaking the private
routing information of network operators. To achieve this, we leverage Secure
Multi-Party Computation (SMPC) techniques to build a novel and general
privacy-preserving primitive that detects whether any subset of SDN rules might
affect the same portion of traffic without learning anything about those rules.
We then leverage that primitive as the main building block of a distributed
system tailored to detect forwarding loops among any set of SDXes. We leverage
the particular nature of SDXes to further improve the efficiency of our SMPC
solution.
The number of valid SDN rules, i.e., not creating loops, rejected by our
solution is 100x lower than previous privacy-preserving solutions, and also
provides better privacy guarantees. Furthermore, our solution naturally
provides network operators with some insight into the cost of the deflected
paths.
|
[
{
"created": "Mon, 25 Jun 2018 17:06:10 GMT",
"version": "v1"
}
] |
2018-06-26
|
[
[
"Dethise",
"Arnaud",
""
],
[
"Chiesa",
"Marco",
""
],
[
"Canini",
"Marco",
""
]
] |
Software-Defined-eXchanges (SDXes) promise to tackle the timely quest of improving the inter-domain routing ecosystem through SDN deployment. Yet, the naive deployment of SDN on the Internet raises concerns about the correctness of the inter-domain data-plane. By allowing operators to deflect traffic from the default BGP route, SDN policies are liable to create permanent forwarding loops invisible to the control-plane. In this paper, we propose a system, called Prelude, for detecting SDN-induced forwarding loops between SDXes with high accuracy without leaking the private routing information of network operators. To achieve this, we leverage Secure Multi-Party Computation (SMPC) techniques to build a novel and general privacy-preserving primitive that detects whether any subset of SDN rules might affect the same portion of traffic without learning anything about those rules. We then leverage that primitive as the main building block of a distributed system tailored to detect forwarding loops among any set of SDXes. We leverage the particular nature of SDXes to further improve the efficiency of our SMPC solution. The number of valid SDN rules, i.e., not creating loops, rejected by our solution is 100x lower than previous privacy-preserving solutions, and also provides better privacy guarantees. Furthermore, our solution naturally provides network operators with some insight into the cost of the deflected paths.
|
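Stripped of the SMPC machinery, the core primitive (can two SDN rules affect the same portion of traffic?) reduces, for IP-prefix matches, to a bit-string containment test. This plaintext sketch ignores privacy entirely; Prelude's contribution is evaluating such a test without revealing the rules to anyone.

```python
def prefixes_overlap(p1: str, p2: str) -> bool:
    """Two IP prefixes, written as bit strings, match overlapping address
    space iff one is a prefix of the other."""
    return p1.startswith(p2) or p2.startswith(p1)

def rules_may_interact(rule_a, rule_b) -> bool:
    """A 'rule' here is (match_prefix_bits, out_port), a simplification of
    real SDN match fields. Only rules whose match sets intersect can
    chain into a forwarding loop, so this test gates the loop check."""
    return prefixes_overlap(rule_a[0], rule_b[0])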
2406.00021
|
Arnav Goel
|
Medha Hira, Arnav Goel, Anubha Gupta
|
CrossVoice: Crosslingual Prosody Preserving Cascade-S2ST using Transfer
Learning
|
8 pages, Accepted at ICLR 2024 - Tiny Track
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents CrossVoice, a novel cascade-based Speech-to-Speech
Translation (S2ST) system employing advanced ASR, MT, and TTS technologies with
cross-lingual prosody preservation through transfer learning. We conducted
comprehensive experiments comparing CrossVoice with direct-S2ST systems,
showing improved BLEU scores on tasks such as Fisher Es-En, VoxPopuli Fr-En and
prosody preservation on benchmark datasets CVSS-T and IndicTTS. With an average
mean opinion score of 3.75 out of 4, speech synthesized by CrossVoice closely
rivals human speech on the benchmark, highlighting the efficacy of
cascade-based systems and transfer learning in multilingual S2ST with prosody
transfer.
|
[
{
"created": "Thu, 23 May 2024 20:30:54 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2024 05:26:48 GMT",
"version": "v2"
}
] |
2024-06-19
|
[
[
"Hira",
"Medha",
""
],
[
"Goel",
"Arnav",
""
],
[
"Gupta",
"Anubha",
""
]
] |
This paper presents CrossVoice, a novel cascade-based Speech-to-Speech Translation (S2ST) system employing advanced ASR, MT, and TTS technologies with cross-lingual prosody preservation through transfer learning. We conducted comprehensive experiments comparing CrossVoice with direct-S2ST systems, showing improved BLEU scores on tasks such as Fisher Es-En, VoxPopuli Fr-En and prosody preservation on benchmark datasets CVSS-T and IndicTTS. With an average mean opinion score of 3.75 out of 4, speech synthesized by CrossVoice closely rivals human speech on the benchmark, highlighting the efficacy of cascade-based systems and transfer learning in multilingual S2ST with prosody transfer.
|
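The cascade structure the CrossVoice abstract describes (ASR, then MT, then TTS, with prosody carried across) can be expressed as a thin pipeline skeleton. The stage functions here are caller-supplied stand-ins; none of CrossVoice's actual models are reproduced.

```python
def cascade_s2st(asr, mt, tts, src_audio, prosody_extractor=None):
    """Skeleton of a cascade speech-to-speech translation pipeline:
    ASR -> MT -> TTS, with an optional hook that carries source-speech
    prosody features into synthesis."""
    text = asr(src_audio)                       # speech -> source-language text
    translated = mt(text)                       # source text -> target text
    prosody = prosody_extractor(src_audio) if prosody_extractor else None
    return tts(translated, prosody)             # target text (+ prosody) -> speech
```

The design point this makes concrete: each stage can be swapped independently (a strength of cascades over direct S2ST), and prosody preservation only needs one extra path from source audio into the synthesizer.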
2112.13982
|
KitIan Kou
|
Juan Han, Kit Ian Kou, Jifei Miao
|
Quaternion-based dynamic mode decomposition for background modeling in
color videos
|
16 pages
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene Background Initialization (SBI) is one of the challenging problems in
computer vision. Dynamic mode decomposition (DMD) is a recently proposed method
to robustly decompose a video sequence into the background model and the
corresponding foreground part. However, this method needs to convert the color
image into the grayscale image for processing, which leads to the neglect of
the coupling information between the three channels of the color image. In this
study, we propose a quaternion-based DMD (Q-DMD), which extends the DMD by
quaternion matrix analysis, so as to completely preserve the inherent color
structure of the color image and the color video. We exploit the standard
eigenvalues of the quaternion matrix to compute its spectral decomposition and
calculate the corresponding Q-DMD modes and eigenvalues. The results on the
publicly available benchmark datasets prove that our Q-DMD outperforms the
exact DMD method, and experimental results also demonstrate that the
performance of our approach is comparable to that of the state-of-the-art ones.
|
[
{
"created": "Tue, 28 Dec 2021 03:35:39 GMT",
"version": "v1"
}
] |
2021-12-30
|
[
[
"Han",
"Juan",
""
],
[
"Kou",
"Kit Ian",
""
],
[
"Miao",
"Jifei",
""
]
] |
Scene Background Initialization (SBI) is one of the challenging problems in computer vision. Dynamic mode decomposition (DMD) is a recently proposed method to robustly decompose a video sequence into the background model and the corresponding foreground part. However, this method needs to convert the color image into the grayscale image for processing, which leads to the neglect of the coupling information between the three channels of the color image. In this study, we propose a quaternion-based DMD (Q-DMD), which extends the DMD by quaternion matrix analysis, so as to completely preserve the inherent color structure of the color image and the color video. We exploit the standard eigenvalues of the quaternion matrix to compute its spectral decomposition and calculate the corresponding Q-DMD modes and eigenvalues. The results on the publicly available benchmark datasets prove that our Q-DMD outperforms the exact DMD method, and experimental results also demonstrate that the performance of our approach is comparable to that of the state-of-the-art ones.
|
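The grayscale baseline that Q-DMD extends is exact DMD on a snapshot matrix whose columns are frames; the background corresponds to the mode whose eigenvalue is closest to 1 (temporally constant). A minimal real-valued version, without the quaternion algebra:

```python
import numpy as np

def dmd(X):
    """Exact DMD on a real snapshot matrix X (columns = frames). Returns
    DMD eigenvalues and modes; in background modeling, the mode with
    eigenvalue nearest 1 is the (static) background."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = int((s > 1e-10 * s[0]).sum())        # drop numerically zero directions
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Reduced operator approximating the frame-to-frame dynamics.
    A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```

On a toy sequence built from a static background plus a geometrically decaying foreground, the recovered eigenvalues are exactly the background's 1 and the foreground's decay rate.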
1507.05169
|
Alexander Spiegelman
|
Alexander Spiegelman, Yuval Cassuto, Gregory Chockler, and Idit Keidar
|
Space Bounds for Reliable Storage: Fundamental Limits of Coding
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the inherent space requirements of shared storage algorithms in
asynchronous fault-prone systems. Previous works use codes to achieve a better
storage cost than the well-known replication approach. However, a closer look
reveals that they incur extra costs somewhere else: Some use unbounded storage
in communication links, while others assume bounded concurrency or synchronous
periods. We prove here that this is inherent, and indeed, if there is no bound
on the concurrency level, then the storage cost of any reliable storage
algorithm is at least f+1 times the data size, where f is the number of
tolerated failures. We further present a technique for combining erasure-codes
with full replication so as to obtain the best of both. We present a storage
algorithm whose storage cost is close to the lower bound in the worst case, and
adapts to the concurrency level.
|
[
{
"created": "Sat, 18 Jul 2015 10:25:18 GMT",
"version": "v1"
}
] |
2015-07-21
|
[
[
"Spiegelman",
"Alexander",
""
],
[
"Cassuto",
"Yuval",
""
],
[
"Chockler",
"Gregory",
""
],
[
"Keidar",
"Idit",
""
]
] |
We study the inherent space requirements of shared storage algorithms in asynchronous fault-prone systems. Previous works use codes to achieve a better storage cost than the well-known replication approach. However, a closer look reveals that they incur extra costs somewhere else: Some use unbounded storage in communication links, while others assume bounded concurrency or synchronous periods. We prove here that this is inherent, and indeed, if there is no bound on the concurrency level, then the storage cost of any reliable storage algorithm is at least f+1 times the data size, where f is the number of tolerated failures. We further present a technique for combining erasure-codes with full replication so as to obtain the best of both. We present a storage algorithm whose storage cost is close to the lower bound in the worst case, and adapts to the concurrency level.
|
1907.02678
|
Jingling Yuan
|
Yang Cao, Jingling Yuan, Song Xiao, Qing Xie
|
TPM: A GPS-based Trajectory Pattern Mining System
| null | null | null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  With the development of big data and artificial intelligence, urban computing
has become more mature and widely used. In urban computing, using GPS-based
trajectory data to discover urban dense areas, extract similar urban
trajectories, predict urban traffic, and solve traffic congestion problems are
all important issues. This paper presents a GPS-based trajectory pattern
mining system called TPM. First, TPM can mine urban dense areas by clustering
spatial-temporal data and automatically generate trajectories after timing
trajectory identification. In particular, we propose a method for trajectory
similarity matching, through which similar trajectories can be extracted in
this system. TPM can be applied to any trajectory system equipped with a GPS
device, such as vehicle, bicycle, or electronic-bracelet trajectories, to
provide services for traffic navigation and journey recommendation.
Meanwhile, the system can support decision-making for urban resource
allocation, urban functional region identification, traffic congestion
mitigation, and so on.
|
[
{
"created": "Fri, 5 Jul 2019 04:58:10 GMT",
"version": "v1"
}
] |
2019-07-08
|
[
[
"Cao",
"Yang",
""
],
[
"Yuan",
"Jingling",
""
],
[
"Xiao",
"Song",
""
],
[
"Xie",
"Qing",
""
]
] |
With the development of big data and artificial intelligence, urban computing has become more mature and widely used. In urban computing, using GPS-based trajectory data to discover urban dense areas, extract similar urban trajectories, predict urban traffic, and solve traffic congestion problems are all important issues. This paper presents a GPS-based trajectory pattern mining system called TPM. First, TPM can mine urban dense areas by clustering spatial-temporal data and automatically generate trajectories after timing trajectory identification. In particular, we propose a method for trajectory similarity matching, through which similar trajectories can be extracted in this system. TPM can be applied to any trajectory system equipped with a GPS device, such as vehicle, bicycle, or electronic-bracelet trajectories, to provide services for traffic navigation and journey recommendation. Meanwhile, the system can support decision-making for urban resource allocation, urban functional region identification, traffic congestion mitigation, and so on.
|
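The abstract does not spell out TPM's similarity measure. Dynamic time warping is one common choice for GPS trajectories sampled at different rates, sketched here purely as a stand-in for whatever the system actually uses.

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic-time-warping distance between two trajectories given as
    sequences of (x, y) points. Aligns points monotonically so that
    trajectories traced at different speeds still compare as similar."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A densely resampled copy of a route scores far closer to the original than a parallel route a few blocks away, which is the behavior similarity matching needs.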
2306.04862
|
Jingyue Li Prof.
|
Carl Smestad (1) and Jingyue Li (2) ((1) Norwegian University of
Science and Technology, (2) Norwegian University of Science and Technology)
|
A Systematic Literature Review on Client Selection in Federated Learning
| null | null |
10.1145/3593434.3593438
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With rising concerns about privacy in machine learning, federated learning
(FL) was invented in 2017, in which clients, such as mobile devices, train a
model locally and send updates to a centralized server. Choosing clients
randomly for FL can harm learning performance for various reasons. Many
studies have proposed approaches to address the challenges of client selection
in FL. However, no systematic literature review (SLR) on this topic existed.
This SLR investigates the state of the art of client selection in FL and
identifies the challenges, the proposed solutions, and the metrics used to
evaluate those solutions. We systematically reviewed 47 primary studies. The
main challenges found in client selection are heterogeneity, resource
allocation, communication costs, and fairness. The client selection schemes
aim to improve on the original random selection algorithm by focusing on one
or several of the aforementioned challenges. The most common metric used is
testing accuracy versus communication rounds: testing accuracy measures the
success of learning, preferably achieved in as few communication rounds as
possible, since rounds are very expensive. Although several improvements can
be made to the current state of client selection, the most beneficial ones are
evaluating the impact of unsuccessful clients and gaining a more theoretical
understanding of the impact of fairness in FL.
|
[
{
"created": "Thu, 8 Jun 2023 01:26:22 GMT",
"version": "v1"
}
] |
2023-06-09
|
[
[
"Smestad",
"Carl",
""
],
[
"Li",
"Jingyue",
""
]
] |
With rising concerns about privacy in machine learning, federated learning (FL) was invented in 2017, in which clients, such as mobile devices, train a model locally and send updates to a centralized server. Choosing clients randomly for FL can harm learning performance for various reasons. Many studies have proposed approaches to address the challenges of client selection in FL. However, no systematic literature review (SLR) on this topic existed. This SLR investigates the state of the art of client selection in FL and identifies the challenges, the proposed solutions, and the metrics used to evaluate those solutions. We systematically reviewed 47 primary studies. The main challenges found in client selection are heterogeneity, resource allocation, communication costs, and fairness. The client selection schemes aim to improve on the original random selection algorithm by focusing on one or several of the aforementioned challenges. The most common metric used is testing accuracy versus communication rounds: testing accuracy measures the success of learning, preferably achieved in as few communication rounds as possible, since rounds are very expensive. Although several improvements can be made to the current state of client selection, the most beneficial ones are evaluating the impact of unsuccessful clients and gaining a more theoretical understanding of the impact of fairness in FL.
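To make concrete what a client selection scheme replaces, here is a minimal sketch contrasting the baseline uniform-random selection with score-weighted sampling; the scoring idea (data size, battery level, link quality) is a generic illustration, not a specific scheme from the reviewed studies:

```python
import random

def select_clients(clients, k, scores=None, seed=0):
    """Pick k clients for one FL round. With scores=None this is the
    baseline uniform-random selection criticized in the literature;
    otherwise clients are sampled without replacement with probability
    proportional to their score (e.g. data size, battery, link quality),
    one simple way to account for heterogeneity and resource limits."""
    rng = random.Random(seed)
    if scores is None:
        return rng.sample(clients, k)
    pool = list(clients)
    weights = [float(scores[c]) for c in pool]
    chosen = []
    for _ in range(k):
        # roulette-wheel draw proportional to the remaining weights
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for idx, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        chosen.append(pool.pop(idx))
        weights.pop(idx)
    return chosen
```

Fixing the seed makes a round's selection reproducible, which is convenient when comparing schemes on the testing-accuracy-versus-rounds metric.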
|
2310.09749
|
Yifeng Xiong
|
Yifeng Xiong, Fan Liu, Kai Wan, Weijie Yuan, Yuanhao Cui, and Giuseppe
Caire
|
From Torch to Projector: Fundamental Tradeoff of Integrated Sensing and
Communications
|
15 pages, 11 figures, submitted to IEEE BITS the Information Theory
Magazine
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Sensing and communications (S&C) have historically been developed in parallel.
In the past decade, they have been evolving from separation to integration,
giving rise to the integrated sensing and communications (ISAC) paradigm,
which has been recognized as one of the six key 6G usage scenarios. Despite
the plethora of research works dedicated to ISAC signal processing, the
fundamental performance limits of S&C remain largely unexplored in an ISAC
system. In this tutorial paper, we attempt to summarize the recent research
findings in characterizing the performance boundary of ISAC systems and the
resulting S&C tradeoff from an information-theoretic viewpoint. We begin with
a folklore "torch metaphor" that depicts the resource competition mechanism of
S&C. Then, we elaborate on the fundamental capacity-distortion (C-D) theory,
indicating the incompleteness of this metaphor. Towards that end, we further
elaborate on the S&C tradeoff by discussing a special case within the C-D
framework, namely the Cramer-Rao bound (CRB)-rate region. In particular, S&C
have preference discrepancies over both the subspace occupied by the
transmitted signal and the adopted codebook, leading to a "projector metaphor"
complementary to the ISAC torch analogy. We also present two practical design
examples by leveraging the lessons learned from the fundamental theories.
Finally, we conclude the paper by identifying a number of open challenges.
|
[
{
"created": "Sun, 15 Oct 2023 06:14:49 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2023 12:30:20 GMT",
"version": "v2"
}
] |
2023-10-18
|
[
[
"Xiong",
"Yifeng",
""
],
[
"Liu",
"Fan",
""
],
[
"Wan",
"Kai",
""
],
[
"Yuan",
"Weijie",
""
],
[
"Cui",
"Yuanhao",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
Sensing and communications (S&C) have historically been developed in parallel. In the past decade, they have been evolving from separation to integration, giving rise to the integrated sensing and communications (ISAC) paradigm, which has been recognized as one of the six key 6G usage scenarios. Despite the plethora of research works dedicated to ISAC signal processing, the fundamental performance limits of S&C remain largely unexplored in an ISAC system. In this tutorial paper, we attempt to summarize the recent research findings in characterizing the performance boundary of ISAC systems and the resulting S&C tradeoff from an information-theoretic viewpoint. We begin with a folklore "torch metaphor" that depicts the resource competition mechanism of S&C. Then, we elaborate on the fundamental capacity-distortion (C-D) theory, indicating the incompleteness of this metaphor. Towards that end, we further elaborate on the S&C tradeoff by discussing a special case within the C-D framework, namely the Cramer-Rao bound (CRB)-rate region. In particular, S&C have preference discrepancies over both the subspace occupied by the transmitted signal and the adopted codebook, leading to a "projector metaphor" complementary to the ISAC torch analogy. We also present two practical design examples by leveraging the lessons learned from the fundamental theories. Finally, we conclude the paper by identifying a number of open challenges.
|
1610.05531
|
Tobias Fiebig
|
Tobias Fiebig, Franziska Lichtblau, Florian Streibelt, Thorben
Krueger, Pieter Lexis, Randy Bush and Anja Feldmann
|
SoK: An Analysis of Protocol Design: Avoiding Traps for Implementation
and Deployment
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's Internet utilizes a multitude of different protocols. While some of
these protocols were first implemented and used and only later documented,
others were first specified and then implemented. Regardless of how protocols
came to be, their definitions can contain traps that lead to insecure
implementations or deployments. A classical example is insufficiently strict
authentication requirements in a protocol specification. The resulting
misconfigurations, i.e., not enabling strong authentication, are common root
causes of Internet security incidents. Indeed, Internet protocols have
commonly been designed without security in mind, which leads to a multitude of
misconfiguration traps. While this is slowly changing, too strict security
considerations can have a similarly bad effect. Due to complex implementations
and insufficient documentation, security features may remain unused, leaving
deployments vulnerable.
In this paper we provide a systematization of the security traps found in
common Internet protocols. By separating protocols into four classes we
identify major factors that lead to common security traps. These insights,
together with observations about end-user-centric usability and security by
default, are then used to derive recommendations for improving existing and
designing new protocols---without such security-sensitive traps for operators,
implementors, and users.
|
[
{
"created": "Tue, 18 Oct 2016 10:57:22 GMT",
"version": "v1"
}
] |
2016-10-19
|
[
[
"Fiebig",
"Tobias",
""
],
[
"Lichtblau",
"Franziska",
""
],
[
"Streibelt",
"Florian",
""
],
[
"Krueger",
"Thorben",
""
],
[
"Lexis",
"Pieter",
""
],
[
"Bush",
"Randy",
""
],
[
"Feldmann",
"Anja",
""
]
] |
Today's Internet utilizes a multitude of different protocols. While some of these protocols were first implemented and used and only later documented, others were first specified and then implemented. Regardless of how protocols came to be, their definitions can contain traps that lead to insecure implementations or deployments. A classical example is insufficiently strict authentication requirements in a protocol specification. The resulting misconfigurations, i.e., not enabling strong authentication, are common root causes of Internet security incidents. Indeed, Internet protocols have commonly been designed without security in mind, which leads to a multitude of misconfiguration traps. While this is slowly changing, too strict security considerations can have a similarly bad effect. Due to complex implementations and insufficient documentation, security features may remain unused, leaving deployments vulnerable. In this paper we provide a systematization of the security traps found in common Internet protocols. By separating protocols into four classes we identify major factors that lead to common security traps. These insights, together with observations about end-user-centric usability and security by default, are then used to derive recommendations for improving existing and designing new protocols---without such security-sensitive traps for operators, implementors, and users.
|
2212.01365
|
Hong Jun Jeon
|
Hong Jun Jeon, Benjamin Van Roy
|
An Information-Theoretic Analysis of Compute-Optimal Neural Scaling Laws
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We study the compute-optimal trade-off between model and training data set
sizes for large neural networks. Our result suggests a linear relation similar
to that supported by the empirical analysis of chinchilla. While that work
studies transformer-based large language models trained on the MassiveText
corpus gopher, as a starting point for development of a mathematical theory, we
focus on a simpler learning model and data generating process, each based on a
neural network with a sigmoidal output unit and single hidden layer of ReLU
activation units. We introduce general error upper bounds for a class of
algorithms which incrementally update a statistic (for example gradient
descent). For a particular learning model inspired by barron 1993, we establish
an upper bound on the minimal information-theoretically achievable expected
error as a function of model and data set sizes. We then derive allocations of
computation that minimize this bound. We present empirical results which
suggest that this approximation correctly identifies an asymptotic linear
compute-optimal scaling. This approximation also generates new insights. Among
other things, it suggests that, as the input dimension or latent space
complexity grows, as might be the case for example if a longer history of
tokens is taken as input to a language model, a larger fraction of the compute
budget should be allocated to growing the learning model rather than training
data.
|
[
{
"created": "Fri, 2 Dec 2022 18:46:41 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Oct 2023 20:53:04 GMT",
"version": "v2"
}
] |
2023-10-20
|
[
[
"Jeon",
"Hong Jun",
""
],
[
"Van Roy",
"Benjamin",
""
]
] |
We study the compute-optimal trade-off between model and training data set sizes for large neural networks. Our result suggests a linear relation similar to that supported by the empirical analysis of chinchilla. While that work studies transformer-based large language models trained on the MassiveText corpus gopher, as a starting point for development of a mathematical theory, we focus on a simpler learning model and data generating process, each based on a neural network with a sigmoidal output unit and single hidden layer of ReLU activation units. We introduce general error upper bounds for a class of algorithms which incrementally update a statistic (for example gradient descent). For a particular learning model inspired by barron 1993, we establish an upper bound on the minimal information-theoretically achievable expected error as a function of model and data set sizes. We then derive allocations of computation that minimize this bound. We present empirical results which suggest that this approximation correctly identifies an asymptotic linear compute-optimal scaling. This approximation also generates new insights. Among other things, it suggests that, as the input dimension or latent space complexity grows, as might be the case for example if a longer history of tokens is taken as input to a language model, a larger fraction of the compute budget should be allocated to growing the learning model rather than training data.
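The linear compute-optimal relation can be made tangible with a toy allocator. It assumes the common C ≈ 6ND FLOP approximation and a fixed tokens-per-parameter ratio; the 20:1 value is an illustrative Chinchilla-style number, not a result from this paper:

```python
def compute_optimal_allocation(compute_budget, tokens_per_param=20.0):
    """Split a training FLOP budget C ~= 6 * N * D between model
    parameters N and training tokens D under a linear compute-optimal
    rule D = tokens_per_param * N. Substituting the rule into the
    budget gives N = sqrt(C / (6 * ratio)), so both N and D grow as
    sqrt(C) as compute scales up."""
    n = (compute_budget / (6.0 * tokens_per_param)) ** 0.5
    d = tokens_per_param * n
    return n, d
```

Under this rule, quadrupling the compute budget doubles both the parameter count and the token count, which is the asymptotic linear scaling the abstract refers to.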
|
2311.13168
|
Jianwei Feng
|
Jianwei Feng and Prateek Singhal
|
3D Face Style Transfer with a Hybrid Solution of NeRF and Mesh
Rasterization
| null |
WACV 2024
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Style transfer for human faces has been widely researched in recent years. The
majority of existing approaches work in the 2D image domain and suffer from 3D
inconsistency issues when applied to different viewpoints of the same face. In
this paper, we tackle the problem of 3D face style transfer, which aims at
generating stylized novel views of a 3D human face with multi-view
consistency. We propose to use a neural radiance field (NeRF) to represent a
3D human face and combine it with 2D style transfer to stylize the 3D face. We
find that directly training a NeRF on stylized images from 2D style transfer
introduces 3D inconsistency issues and causes blurriness. On the other hand,
training a NeRF jointly with 2D style transfer objectives shows poor
convergence due to the identity and head pose gap between the style image and
the content image. It also poses challenges in training time and memory due to
the need for volume rendering of the full image to apply the style transfer
loss functions. We therefore propose a hybrid framework of NeRF and mesh
rasterization that combines the benefits of the high-fidelity geometry
reconstruction of NeRF and the fast rendering speed of meshes. Our framework
consists of three stages: 1. training a NeRF model on input face images to
learn the 3D geometry; 2. extracting a mesh from the trained NeRF model and
optimizing it with style transfer objectives via differentiable rasterization;
3. training a new color network in NeRF conditioned on a style embedding to
enable arbitrary style transfer to the 3D face. Experimental results show that
our approach generates high-quality face style transfer with strong 3D
consistency, while also enabling flexible style control.
|
[
{
"created": "Wed, 22 Nov 2023 05:24:35 GMT",
"version": "v1"
}
] |
2023-11-23
|
[
[
"Feng",
"Jianwei",
""
],
[
"Singhal",
"Prateek",
""
]
] |
Style transfer for human faces has been widely researched in recent years. The majority of existing approaches work in the 2D image domain and suffer from 3D inconsistency issues when applied to different viewpoints of the same face. In this paper, we tackle the problem of 3D face style transfer, which aims at generating stylized novel views of a 3D human face with multi-view consistency. We propose to use a neural radiance field (NeRF) to represent a 3D human face and combine it with 2D style transfer to stylize the 3D face. We find that directly training a NeRF on stylized images from 2D style transfer introduces 3D inconsistency issues and causes blurriness. On the other hand, training a NeRF jointly with 2D style transfer objectives shows poor convergence due to the identity and head pose gap between the style image and the content image. It also poses challenges in training time and memory due to the need for volume rendering of the full image to apply the style transfer loss functions. We therefore propose a hybrid framework of NeRF and mesh rasterization that combines the benefits of the high-fidelity geometry reconstruction of NeRF and the fast rendering speed of meshes. Our framework consists of three stages: 1. training a NeRF model on input face images to learn the 3D geometry; 2. extracting a mesh from the trained NeRF model and optimizing it with style transfer objectives via differentiable rasterization; 3. training a new color network in NeRF conditioned on a style embedding to enable arbitrary style transfer to the 3D face. Experimental results show that our approach generates high-quality face style transfer with strong 3D consistency, while also enabling flexible style control.
|
1805.11384
|
Bicheng Ying
|
Bicheng Ying and Kun Yuan and Ali H. Sayed
|
Supervised Learning Under Distributed Features
| null | null |
10.1109/TSP.2018.2881661
| null |
cs.MA cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work studies the problem of learning in scenarios with both large
datasets and high-dimensional feature spaces. The feature information is
assumed to be spread across agents in a network, where each agent observes
some of the features. Through local cooperation, the agents are supposed to
interact with each other to solve an inference problem and converge towards
the global minimizer of an empirical risk. We study this problem exclusively
in the primal domain, and propose new and effective distributed solutions with
guaranteed convergence to the minimizer at a linear rate under strong
convexity. This is achieved by combining a dynamic diffusion construction, a
pipeline strategy, and variance-reduced techniques. Simulation results
illustrate the conclusions.
|
[
{
"created": "Tue, 29 May 2018 12:25:37 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Nov 2018 08:17:47 GMT",
"version": "v2"
},
{
"created": "Fri, 22 May 2020 18:06:47 GMT",
"version": "v3"
}
] |
2020-05-26
|
[
[
"Ying",
"Bicheng",
""
],
[
"Yuan",
"Kun",
""
],
[
"Sayed",
"Ali H.",
""
]
] |
This work studies the problem of learning in scenarios with both large datasets and high-dimensional feature spaces. The feature information is assumed to be spread across agents in a network, where each agent observes some of the features. Through local cooperation, the agents are supposed to interact with each other to solve an inference problem and converge towards the global minimizer of an empirical risk. We study this problem exclusively in the primal domain, and propose new and effective distributed solutions with guaranteed convergence to the minimizer at a linear rate under strong convexity. This is achieved by combining a dynamic diffusion construction, a pipeline strategy, and variance-reduced techniques. Simulation results illustrate the conclusions.
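A minimal sketch of the feature-partitioned setting helps fix ideas: each agent owns a column block of the data matrix and the matching block of the weight vector, and only scalar partial predictions are exchanged. This is plain gradient descent, not the paper's dynamic-diffusion, pipelined, variance-reduced algorithm:

```python
import numpy as np

def distributed_feature_gd(X_parts, y, lr=0.1, steps=2000):
    """Least-squares learning with features split across agents: agent k
    holds column block X_k and weight block w_k. Each iteration the
    agents share only their partial predictions X_k @ w_k (summed to
    form the joint prediction) and then take local gradient steps on
    their own blocks. A simplified sketch of the feature-distributed
    primal setting described in the abstract."""
    ws = [np.zeros(Xk.shape[1]) for Xk in X_parts]
    n = len(y)
    for _ in range(steps):
        # the only cross-agent exchange: sum of partial predictions
        pred = sum(Xk @ wk for Xk, wk in zip(X_parts, ws))
        err = pred - y
        for k, Xk in enumerate(X_parts):
            ws[k] = ws[k] - lr * (Xk.T @ err) / n  # local step on block k
    return ws
```

Note that no agent ever sees another agent's raw features, only the aggregated prediction, which is the appeal of the feature-distributed formulation.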
|
1104.5059
|
Mitchell Bloch
|
Mitchell Keith Bloch
|
Reducing Commitment to Tasks with Off-Policy Hierarchical Reinforcement
Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In experimenting with off-policy temporal difference (TD) methods in
hierarchical reinforcement learning (HRL) systems, we have observed unwanted
on-policy learning under reproducible conditions. Here we present modifications
to several TD methods that prevent unintentional on-policy learning from
occurring. These modifications create a tension between exploration and
learning. Traditional TD methods require commitment to finishing subtasks
without exploration in order to update Q-values for early actions with high
probability. One-step intra-option learning and temporal second difference
traces (TSDT) do not suffer from this limitation. We demonstrate that our HRL
system is efficient without commitment to completion of subtasks in a
cliff-walking domain, contrary to a widespread claim in the literature that it
is critical for efficiency of learning. Furthermore, decreasing commitment as
exploration progresses is shown to improve both online performance and the
resultant policy in the taxicab domain, opening a new avenue for research into
when it is more beneficial to continue with the current subtask or to replan.
|
[
{
"created": "Wed, 27 Apr 2011 00:58:52 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Bloch",
"Mitchell Keith",
""
]
] |
In experimenting with off-policy temporal difference (TD) methods in hierarchical reinforcement learning (HRL) systems, we have observed unwanted on-policy learning under reproducible conditions. Here we present modifications to several TD methods that prevent unintentional on-policy learning from occurring. These modifications create a tension between exploration and learning. Traditional TD methods require commitment to finishing subtasks without exploration in order to update Q-values for early actions with high probability. One-step intra-option learning and temporal second difference traces (TSDT) do not suffer from this limitation. We demonstrate that our HRL system is efficient without commitment to completion of subtasks in a cliff-walking domain, contrary to a widespread claim in the literature that it is critical for efficiency of learning. Furthermore, decreasing commitment as exploration progresses is shown to improve both online performance and the resultant policy in the taxicab domain, opening a new avenue for research into when it is more beneficial to continue with the current subtask or to replan.
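The flavor of one-step intra-option learning can be conveyed with a tabular update in which every option whose policy agrees with the executed primitive action is updated from the same transition. This is a generic textbook-style form with a simplified target, not the paper's TSDT method; all names are illustrative:

```python
def intra_option_updates(Q, option_policies, s, a, r, s2, alpha=0.5, gamma=0.9):
    """One-step intra-option Q-learning sketch: after executing primitive
    action a in state s, update every option o whose (deterministic)
    policy would also have chosen a in s. Value estimates thus improve
    without committing to finishing any single subtask. Q maps
    (option, state) -> value; option_policies maps option -> {state: action}."""
    options = list(option_policies)
    target = r + gamma * max(Q[(o, s2)] for o in options)
    for o, policy in option_policies.items():
        if policy[s] == a:  # option consistent with the executed action
            Q[(o, s)] += alpha * (target - Q[(o, s)])
    return Q
```

Because consistent options share each transition, exploration steps taken mid-subtask still produce useful updates, which is what removes the need for commitment to subtask completion.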
|
2110.08616
|
Zhihao Zhang
|
Zhihao Zhang, Zhihao Jia
|
GradSign: Model Performance Inference with Theoretical Insights
| null |
The Tenth International Conference on Learning Representations
(ICLR 2022)
| null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A key challenge in neural architecture search (NAS) is quickly inferring the
predictive performance of a broad spectrum of networks to discover
statistically accurate and computationally efficient ones. We refer to this
task as model performance inference (MPI). The current practice for efficient
MPI is gradient-based methods that leverage the gradients of a network at
initialization to infer its performance. However, existing gradient-based
methods rely only on heuristic metrics and lack the necessary theoretical
foundations to consolidate their designs. We propose GradSign, an accurate,
simple, and flexible metric for model performance inference with theoretical
insights. The key idea behind GradSign is a quantity {\Psi} to analyze the
optimization landscape of different networks at the granularity of individual
training samples. Theoretically, we show that both the network's training and
true population losses are proportionally upper-bounded by {\Psi} under
reasonable assumptions. In addition, we design GradSign, an accurate and simple
approximation of {\Psi} using the gradients of a network evaluated at a random
initialization state. Evaluation on seven NAS benchmarks across three training
datasets shows that GradSign generalizes well to real-world networks and
consistently outperforms state-of-the-art gradient-based methods for MPI
evaluated by Spearman's {\rho} and Kendall's Tau. Additionally, we integrate
GradSign into four existing NAS algorithms and show that the GradSign-assisted
NAS algorithms outperform their vanilla counterparts by improving the
accuracies of best-discovered networks by up to 0.3%, 1.1%, and 1.0% on three
real-world tasks.
|
[
{
"created": "Sat, 16 Oct 2021 17:03:10 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Jun 2022 19:34:43 GMT",
"version": "v2"
}
] |
2022-06-22
|
[
[
"Zhang",
"Zhihao",
""
],
[
"Jia",
"Zhihao",
""
]
] |
A key challenge in neural architecture search (NAS) is quickly inferring the predictive performance of a broad spectrum of networks to discover statistically accurate and computationally efficient ones. We refer to this task as model performance inference (MPI). The current practice for efficient MPI is gradient-based methods that leverage the gradients of a network at initialization to infer its performance. However, existing gradient-based methods rely only on heuristic metrics and lack the necessary theoretical foundations to consolidate their designs. We propose GradSign, an accurate, simple, and flexible metric for model performance inference with theoretical insights. The key idea behind GradSign is a quantity {\Psi} to analyze the optimization landscape of different networks at the granularity of individual training samples. Theoretically, we show that both the network's training and true population losses are proportionally upper-bounded by {\Psi} under reasonable assumptions. In addition, we design GradSign, an accurate and simple approximation of {\Psi} using the gradients of a network evaluated at a random initialization state. Evaluation on seven NAS benchmarks across three training datasets shows that GradSign generalizes well to real-world networks and consistently outperforms state-of-the-art gradient-based methods for MPI evaluated by Spearman's {\rho} and Kendall's Tau. Additionally, we integrate GradSign into four existing NAS algorithms and show that the GradSign-assisted NAS algorithms outperform their vanilla counterparts by improving the accuracies of best-discovered networks by up to 0.3%, 1.1%, and 1.0% on three real-world tasks.
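The intuition behind the sample-wise quantity {\Psi} can be illustrated with a tiny NumPy sketch of a sign-agreement score over per-sample gradients. This is a simplified illustration of the idea, not a verified reimplementation of the published metric:

```python
import numpy as np

def gradsign_score(per_sample_grads):
    """Sign-agreement score over per-sample gradients at initialization.
    per_sample_grads has shape (num_samples, num_params): row i is the
    gradient of sample i's loss w.r.t. all parameters. For each parameter
    we sum the per-sample gradient signs and take the absolute value, so
    parameters whose per-sample landscapes push in the same direction
    contribute more; the total rewards locally aligned optimization
    landscapes across training samples."""
    signs = np.sign(per_sample_grads)
    return float(np.abs(signs.sum(axis=0)).sum())
```

A higher score for one untrained network than another would, under this heuristic, predict better eventual accuracy without training either network.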
|
1704.03225
|
Lukas Mosser
|
Lukas Mosser, Olivier Dubrule, Martin J. Blunt
|
Reconstruction of three-dimensional porous media using generative
adversarial neural networks
|
21 pages, 20 figures
|
Phys. Rev. E 96, 043309 (2017)
|
10.1103/PhysRevE.96.043309
| null |
cs.CV cond-mat.mtrl-sci physics.flu-dyn physics.geo-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To evaluate the variability of multi-phase flow properties of porous media at
the pore scale, it is necessary to acquire a number of representative samples
of the void-solid structure. While modern X-ray computed tomography has made it
possible to extract three-dimensional images of the pore space, assessment of
the variability in the inherent material properties is often experimentally not
feasible. We present a novel method to reconstruct the solid-void structure of
porous media by applying a generative neural network that allows an implicit
description of the probability distribution represented by three-dimensional
image datasets. We show, by using an adversarial learning approach for neural
networks, that this method of unsupervised learning is able to generate
representative samples of porous media that honor their statistics. We
successfully compare measures of pore morphology, such as the Euler
characteristic, two-point statistics and directional single-phase permeability
of synthetic realizations with the calculated properties of a bead pack, Berea
sandstone, and Ketton limestone. Results show that GANs can be used to
reconstruct high-resolution three-dimensional images of porous media at
different scales that are representative of the morphology of the images used
to train the neural network. The fully convolutional nature of the trained
neural network allows the generation of large samples while maintaining
computational efficiency. Compared to classical stochastic methods of image
reconstruction, the implicit representation of the learned data distribution
can be stored and reused to generate multiple realizations of the pore
structure very rapidly.
|
[
{
"created": "Tue, 11 Apr 2017 09:55:55 GMT",
"version": "v1"
}
] |
2017-11-01
|
[
[
"Mosser",
"Lukas",
""
],
[
"Dubrule",
"Olivier",
""
],
[
"Blunt",
"Martin J.",
""
]
] |
To evaluate the variability of multi-phase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a novel method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image datasets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that GANs can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.
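One of the morphology checks mentioned, the two-point statistics, is easy to state concretely. Below is a sketch of the directional two-point probability S2(r) for a binary pore-space image (an illustrative implementation, not the authors' code):

```python
import numpy as np

def two_point_probability(img, max_lag):
    """Directional two-point probability S2(r) along the last axis of a
    binary pore-space image (1 = pore, 0 = solid): the probability that
    two points separated by lag r are both pore. S2(0) equals the
    porosity; comparing S2 curves of generated versus real samples is
    one way to check that synthetic realizations honor the statistics
    of the training images."""
    img = np.asarray(img, dtype=float)
    s2 = []
    for r in range(max_lag + 1):
        a = img[..., : img.shape[-1] - r]  # points at position x
        b = img[..., r:]                   # points at position x + r
        s2.append(float((a * b).mean()))
    return s2
```

In practice one would average this over all three axes (and many realizations) before comparing a generated sample against the tomography image.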
|
2204.04832
|
Nina Klobas
|
Thekla Hamm and Nina Klobas and George B. Mertzios and Paul G.
Spirakis
|
The Complexity of Temporal Vertex Cover in Small-Degree Graphs
|
Changes to section 4.2.2
| null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal graphs naturally model graphs whose underlying topology changes over
time. Recently, the problems TEMPORAL VERTEX COVER (or TVC) and SLIDING-WINDOW
TEMPORAL VERTEX COVER (or $\Delta$-TVC for time windows of fixed length
$\Delta$) have been established as natural extensions of the classic problem
VERTEX COVER on static graphs, with connections to areas such as surveillance
in sensor networks. In this paper we initiate a systematic study of the
complexity of TVC and $\Delta$-TVC on sparse graphs. Our main result shows
that for every $\Delta\geq 2$, $\Delta$-TVC is NP-hard even when the
underlying topology is described by a path or a cycle. This resolves an open
problem from the literature and shows a surprising contrast between
$\Delta$-TVC and TVC, for which we provide a polynomial-time algorithm in the
same setting. To circumvent this hardness, we present a number of exact and
approximation algorithms for temporal graphs whose underlying topologies are
given by a path, that have bounded vertex degree in every time step, or that
admit a small-sized temporal vertex cover.
|
[
{
"created": "Mon, 11 Apr 2022 02:31:00 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Mar 2024 06:21:45 GMT",
"version": "v2"
}
] |
2024-03-22
|
[
[
"Hamm",
"Thekla",
""
],
[
"Klobas",
"Nina",
""
],
[
"Mertzios",
"George B.",
""
],
[
"Spirakis",
"Paul G.",
""
]
] |
Temporal graphs naturally model graphs whose underlying topology changes over time. Recently, the problems TEMPORAL VERTEX COVER (or TVC) and SLIDING-WINDOW TEMPORAL VERTEX COVER (or $\Delta$-TVC for time windows of fixed length $\Delta$) have been established as natural extensions of the classic problem VERTEX COVER on static graphs, with connections to areas such as surveillance in sensor networks. In this paper we initiate a systematic study of the complexity of TVC and $\Delta$-TVC on sparse graphs. Our main result shows that for every $\Delta\geq 2$, $\Delta$-TVC is NP-hard even when the underlying topology is described by a path or a cycle. This resolves an open problem from the literature and shows a surprising contrast between $\Delta$-TVC and TVC, for which we provide a polynomial-time algorithm in the same setting. To circumvent this hardness, we present a number of exact and approximation algorithms for temporal graphs whose underlying topologies are given by a path, that have bounded vertex degree in every time step, or that admit a small-sized temporal vertex cover.
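The object under study can be checked directly from its definition. The sketch below verifies the basic TVC variant as paraphrased from the abstract; the sliding-window $\Delta$-TVC constraint (covering within every window of length $\Delta$) is omitted:

```python
from collections import defaultdict

def is_temporal_vertex_cover(temporal_edges, cover):
    """Check whether `cover`, a set of (vertex, time) pairs, is a
    temporal vertex cover of a temporal graph given as (u, v, t) edge
    appearances: every edge {u, v} of the underlying graph must be
    covered by one of its endpoints at some time step at which the
    edge is present."""
    appearances = defaultdict(set)
    for u, v, t in temporal_edges:
        appearances[frozenset((u, v))].add(t)
    for edge, times in appearances.items():
        u, v = tuple(edge)
        if not any((u, t) in cover or (v, t) in cover for t in times):
            return False
    return True
```

Pairing such a verifier with brute-force enumeration over small instances is a handy sanity check when implementing the exact or approximation algorithms the paper proposes.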
|
2208.01892
|
Andrea Fronzetti Colladon PhD
|
C. Piselli, A. Fronzetti Colladon, L. Segneri, A. L. Pisello
|
Evaluating and improving social awareness of energy communities through
semantic network analysis of online news
| null |
Renewable and Sustainable Energy Reviews 167, 112792 (2022)
|
10.1016/j.rser.2022.112792
| null |
cs.SI cs.CL physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The implementation of energy communities represents a cross-disciplinary
phenomenon that has the potential to support the energy transition while
fostering citizens' participation throughout the energy system and their
exploitation of renewables. An important role is played by online information
sources in engaging people in this process and increasing their awareness of
associated benefits. In this view, this work analyses online news data on
energy communities to understand people's awareness and the media importance of
this topic. We use the Semantic Brand Score (SBS) indicator as an innovative
measure of semantic importance, combining social network analysis and text
mining methods. Results show different importance trends for energy communities
and other energy and society-related topics, also allowing the identification
of their connections. Our approach gives evidence to information gaps and
possible actions that could be taken to promote a low-carbon energy transition.
|
[
{
"created": "Wed, 3 Aug 2022 07:43:31 GMT",
"version": "v1"
}
] |
2022-08-04
|
[
[
"Piselli",
"C.",
""
],
[
"Colladon",
"A. Fronzetti",
""
],
[
"Segneri",
"L.",
""
],
[
"Pisello",
"A. L.",
""
]
] |
The implementation of energy communities represents a cross-disciplinary phenomenon that has the potential to support the energy transition while fostering citizens' participation throughout the energy system and their exploitation of renewables. An important role is played by online information sources in engaging people in this process and increasing their awareness of associated benefits. In this view, this work analyses online news data on energy communities to understand people's awareness and the media importance of this topic. We use the Semantic Brand Score (SBS) indicator as an innovative measure of semantic importance, combining social network analysis and text mining methods. Results show different importance trends for energy communities and other energy and society-related topics, also allowing the identification of their connections. Our approach gives evidence to information gaps and possible actions that could be taken to promote a low-carbon energy transition.
|
2006.10643
|
Nikolaos Karalias
|
Nikolaos Karalias, Andreas Loukas
|
Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial
Optimization on Graphs
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combinatorial optimization problems are notoriously challenging for neural
networks, especially in the absence of labeled instances. This work proposes an
unsupervised learning framework for CO problems on graphs that can provide
integral solutions of certified quality. Inspired by Erdos' probabilistic
method, we use a neural network to parametrize a probability distribution over
sets. Crucially, we show that when the network is optimized w.r.t. a suitably
chosen loss, the learned distribution contains, with controlled probability, a
low-cost integral solution that obeys the constraints of the combinatorial
problem. The probabilistic proof of existence is then derandomized to decode
the desired solutions. We demonstrate the efficacy of this approach to obtain
valid solutions to the maximum clique problem and to perform local graph
clustering. Our method achieves competitive results on both real datasets and
synthetic hard instances.
|
[
{
"created": "Thu, 18 Jun 2020 16:13:36 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jun 2020 15:58:55 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Nov 2020 19:42:18 GMT",
"version": "v3"
},
{
"created": "Sun, 7 Mar 2021 20:10:53 GMT",
"version": "v4"
}
] |
2021-03-09
|
[
[
"Karalias",
"Nikolaos",
""
],
[
"Loukas",
"Andreas",
""
]
] |
Combinatorial optimization problems are notoriously challenging for neural networks, especially in the absence of labeled instances. This work proposes an unsupervised learning framework for CO problems on graphs that can provide integral solutions of certified quality. Inspired by Erdos' probabilistic method, we use a neural network to parametrize a probability distribution over sets. Crucially, we show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem. The probabilistic proof of existence is then derandomized to decode the desired solutions. We demonstrate the efficacy of this approach to obtain valid solutions to the maximum clique problem and to perform local graph clustering. Our method achieves competitive results on both real datasets and synthetic hard instances.
|
2106.12131
|
Mana Ihori
|
Mana Ihori, Naoki Makishima, Tomohiro Tanaka, Akihiko Takashima, Shota
Orihashi, Ryo Masumura
|
Zero-Shot Joint Modeling of Multiple Spoken-Text-Style Conversion Tasks
using Switching Tokens
|
Accepted at INTERSPEECH 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel spoken-text-style conversion method that
can simultaneously execute multiple style conversion modules such as
punctuation restoration and disfluency deletion without preparing matched
datasets. In practice, transcriptions generated by automatic speech recognition
systems are not highly readable because they often include many disfluencies
and do not include punctuation marks. To improve their readability, multiple
spoken-text-style conversion modules that individually model a single
conversion task are cascaded because matched datasets that simultaneously
handle multiple conversion tasks are often unavailable. However, the cascading
is unstable against the order of tasks because of the chain of conversion
errors. Besides, the computation cost of cascading is necessarily higher than
that of a single conversion. To execute multiple conversion tasks
simultaneously without
preparing matched datasets, our key idea is to distinguish individual
conversion tasks using the on-off switch. In our proposed zero-shot joint
modeling, we switch the individual tasks using multiple switching tokens,
enabling us to utilize a zero-shot learning approach to executing simultaneous
conversions. Our experiments on joint modeling of disfluency deletion and
punctuation restoration demonstrate the effectiveness of our method.
|
[
{
"created": "Wed, 23 Jun 2021 02:53:14 GMT",
"version": "v1"
}
] |
2021-06-24
|
[
[
"Ihori",
"Mana",
""
],
[
"Makishima",
"Naoki",
""
],
[
"Tanaka",
"Tomohiro",
""
],
[
"Takashima",
"Akihiko",
""
],
[
"Orihashi",
"Shota",
""
],
[
"Masumura",
"Ryo",
""
]
] |
In this paper, we propose a novel spoken-text-style conversion method that can simultaneously execute multiple style conversion modules such as punctuation restoration and disfluency deletion without preparing matched datasets. In practice, transcriptions generated by automatic speech recognition systems are not highly readable because they often include many disfluencies and do not include punctuation marks. To improve their readability, multiple spoken-text-style conversion modules that individually model a single conversion task are cascaded because matched datasets that simultaneously handle multiple conversion tasks are often unavailable. However, the cascading is unstable against the order of tasks because of the chain of conversion errors. Besides, the computation cost of cascading is necessarily higher than that of a single conversion. To execute multiple conversion tasks simultaneously without preparing matched datasets, our key idea is to distinguish individual conversion tasks using the on-off switch. In our proposed zero-shot joint modeling, we switch the individual tasks using multiple switching tokens, enabling us to utilize a zero-shot learning approach to executing simultaneous conversions. Our experiments on joint modeling of disfluency deletion and punctuation restoration demonstrate the effectiveness of our method.
|
2007.13278
|
R Devon Hjelm
|
R Devon Hjelm and Philip Bachman
|
Representation Learning with Video Deep InfoMax
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning has made unsupervised pretraining relevant again for
difficult computer vision tasks. The most effective self-supervised methods
involve prediction tasks based on features extracted from diverse views of the
data. DeepInfoMax (DIM) is a self-supervised method which leverages the
internal structure of deep networks to construct such views, forming prediction
tasks between local features which depend on small patches in an image and
global features which depend on the whole image. In this paper, we extend DIM
to the video domain by leveraging similar structure in spatio-temporal
networks, producing a method we call Video Deep InfoMax (VDIM). We find that
drawing views from both natural-rate sequences and temporally-downsampled
sequences yields results on Kinetics-pretrained action recognition tasks which
match or outperform prior state-of-the-art methods that use more costly
large-time-scale transformer models. We also examine the effects of data
augmentation and fine-tuning methods, accomplishing SoTA by a large margin when
training only on the UCF-101 dataset.
|
[
{
"created": "Mon, 27 Jul 2020 02:28:47 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jul 2020 01:27:14 GMT",
"version": "v2"
}
] |
2020-07-29
|
[
[
"Hjelm",
"R Devon",
""
],
[
"Bachman",
"Philip",
""
]
] |
Self-supervised learning has made unsupervised pretraining relevant again for difficult computer vision tasks. The most effective self-supervised methods involve prediction tasks based on features extracted from diverse views of the data. DeepInfoMax (DIM) is a self-supervised method which leverages the internal structure of deep networks to construct such views, forming prediction tasks between local features which depend on small patches in an image and global features which depend on the whole image. In this paper, we extend DIM to the video domain by leveraging similar structure in spatio-temporal networks, producing a method we call Video Deep InfoMax (VDIM). We find that drawing views from both natural-rate sequences and temporally-downsampled sequences yields results on Kinetics-pretrained action recognition tasks which match or outperform prior state-of-the-art methods that use more costly large-time-scale transformer models. We also examine the effects of data augmentation and fine-tuning methods, accomplishing SoTA by a large margin when training only on the UCF-101 dataset.
|
1906.07930
|
Jia-Wei Chen
|
Rongfang Wang, Jia-Wei Chen, Yule Wang, Licheng Jiao, Mi Wang
|
SAR Image Change Detection via Spatial Metric Learning with an Improved
Mahalanobis Distance
| null | null |
10.1109/LGRS.2019.2915251
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The log-ratio (LR) operator has been widely employed to generate the
difference image for synthetic aperture radar (SAR) image change detection.
However, the difference image generated by this pixel-wise operator can be
subject to speckle in SAR images and unavoidable registration errors between
bitemporal SAR images. In this letter, we propose a spatial metric learning
method to obtain a difference image more robust to speckle by learning a
metric from a set of constraint pairs. In the proposed method, spatial context
is considered in constructing constraint pairs, each of which consists of
patches in the same location of bitemporal SAR images. Then, a positive
semi-definite metric matrix $\bf M$ can be obtained by optimization with the
max-margin criterion. Finally, we verify our proposed method on four
challenging datasets of bitemporal SAR images. Experimental results demonstrate
that the difference map obtained by our proposed method outperforms those of
other state-of-the-art methods.
|
[
{
"created": "Wed, 19 Jun 2019 06:10:58 GMT",
"version": "v1"
}
] |
2020-02-19
|
[
[
"Wang",
"Rongfang",
""
],
[
"Chen",
"Jia-Wei",
""
],
[
"Wang",
"Yule",
""
],
[
"Jiao",
"Licheng",
""
],
[
"Wang",
"Mi",
""
]
] |
The log-ratio (LR) operator has been widely employed to generate the difference image for synthetic aperture radar (SAR) image change detection. However, the difference image generated by this pixel-wise operator can be subject to speckle in SAR images and unavoidable registration errors between bitemporal SAR images. In this letter, we propose a spatial metric learning method to obtain a difference image more robust to speckle by learning a metric from a set of constraint pairs. In the proposed method, spatial context is considered in constructing constraint pairs, each of which consists of patches in the same location of bitemporal SAR images. Then, a positive semi-definite metric matrix $\bf M$ can be obtained by optimization with the max-margin criterion. Finally, we verify our proposed method on four challenging datasets of bitemporal SAR images. Experimental results demonstrate that the difference map obtained by our proposed method outperforms those of other state-of-the-art methods.
|
1806.08337
|
Rena Bakhshi
|
Rena Bakhshi, Mary Hester, Jeroen Schot, Lode Kulik
|
Examining key features and platforms of IoT
|
11 pages, 7 figures, technical report
| null |
10.5281/zenodo.1296528
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
To help facilitate expertise in IoT technologies, NLeSC and SURF worked
together on a project focusing on IoT applications and platforms. The
information included in this case study shows the results of NLeSC and SURF's
investigation, examining different features offered by cloud and
self-maintained IoT platforms with an overall summary of an IoT architecture.
|
[
{
"created": "Thu, 21 Jun 2018 17:25:18 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jun 2018 21:29:42 GMT",
"version": "v2"
}
] |
2018-06-26
|
[
[
"Bakhshi",
"Rena",
""
],
[
"Hester",
"Mary",
""
],
[
"Schot",
"Jeroen",
""
],
[
"Kulik",
"Lode",
""
]
] |
To help facilitate expertise in IoT technologies, NLeSC and SURF worked together on a project focusing on IoT applications and platforms. The information included in this case study shows the results of NLeSC and SURF's investigation, examining different features offered by cloud and self-maintained IoT platforms with an overall summary of an IoT architecture.
|
2310.13328
|
Boqian Ma
|
Boqian Ma, Vir Nath Pathak, Lanping Liu, and Sushmita Ruj
|
One-Phase Batch Update on Sparse Merkle Trees for Rollups
|
21 pages, 8 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A sparse Merkle tree is a Merkle tree with fixed height and indexed leaves
given by a map from indices to leaf values. It allows for both efficient
membership and non-membership proofs. It has been widely used as an
authenticated data structure in various applications, such as layer-2 rollups
for blockchains. zkSync Lite, a popular Ethereum layer-2 rollup solution, uses
a sparse Merkle tree to represent the state of the layer-2 blockchain. The
account information is recorded in the leaves of the tree. In this paper, we
study the sparse Merkle tree algorithms presented in zkSync Lite, and propose
an efficient batch update algorithm to calculate a new root hash given a list
of account (leaf) operations. Using the construction in zkSync Lite as a
benchmark, our algorithm 1) improves the account update time from
$\mathcal{O}(\log n)$ to $\mathcal{O}(1)$ and 2) reduces the batch update cost
by half using a one-pass traversal. Empirical analysis of real-world block data
shows that our algorithm outperforms the benchmark by at most 14%.
|
[
{
"created": "Fri, 20 Oct 2023 07:43:54 GMT",
"version": "v1"
}
] |
2023-10-23
|
[
[
"Ma",
"Boqian",
""
],
[
"Pathak",
"Vir Nath",
""
],
[
"Liu",
"Lanping",
""
],
[
"Ruj",
"Sushmita",
""
]
] |
A sparse Merkle tree is a Merkle tree with fixed height and indexed leaves given by a map from indices to leaf values. It allows for both efficient membership and non-membership proofs. It has been widely used as an authenticated data structure in various applications, such as layer-2 rollups for blockchains. zkSync Lite, a popular Ethereum layer-2 rollup solution, uses a sparse Merkle tree to represent the state of the layer-2 blockchain. The account information is recorded in the leaves of the tree. In this paper, we study the sparse Merkle tree algorithms presented in zkSync Lite, and propose an efficient batch update algorithm to calculate a new root hash given a list of account (leaf) operations. Using the construction in zkSync Lite as a benchmark, our algorithm 1) improves the account update time from $\mathcal{O}(\log n)$ to $\mathcal{O}(1)$ and 2) reduces the batch update cost by half using a one-pass traversal. Empirical analysis of real-world block data shows that our algorithm outperforms the benchmark by at most 14%.
|
cs/0605016
|
Yingbin Liang
|
Yingbin Liang and Venugopal V. Veeravalli
|
Cooperative Relay Broadcast Channels
|
Submitted to the IEEE Transactions on Information Theory, July 2005
| null | null | null |
cs.IT math.IT
| null |
The capacity regions are investigated for two relay broadcast channels
(RBCs), where relay links are incorporated into standard two-user broadcast
channels to support user cooperation. In the first channel, the Partially
Cooperative Relay Broadcast Channel, only one user in the system can act as a
relay and transmit to the other user through a relay link. An achievable rate
region is derived based on the relay using the decode-and-forward scheme. An
outer bound on the capacity region is derived and is shown to be tighter than
the cut-set bound. For the special case where the Partially Cooperative RBC is
degraded, the achievable rate region is shown to be tight and provides the
capacity region. Gaussian Partially Cooperative RBCs and Partially Cooperative
RBCs with feedback are further studied. In the second channel model being
studied in the paper, the Fully Cooperative Relay Broadcast Channel, both users
can act as relay nodes and transmit to each other through relay links. This is
a more general model than the Partially Cooperative RBC. All the results for
Partially Cooperative RBCs are correspondingly generalized to the Fully
Cooperative RBCs. It is further shown that the AWGN Fully Cooperative RBC has a
larger achievable rate region than the AWGN Partially Cooperative RBC. The
results illustrate that relaying and user cooperation are powerful techniques
in improving the capacity of broadcast channels.
|
[
{
"created": "Thu, 4 May 2006 19:13:50 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Liang",
"Yingbin",
""
],
[
"Veeravalli",
"Venugopal V.",
""
]
] |
The capacity regions are investigated for two relay broadcast channels (RBCs), where relay links are incorporated into standard two-user broadcast channels to support user cooperation. In the first channel, the Partially Cooperative Relay Broadcast Channel, only one user in the system can act as a relay and transmit to the other user through a relay link. An achievable rate region is derived based on the relay using the decode-and-forward scheme. An outer bound on the capacity region is derived and is shown to be tighter than the cut-set bound. For the special case where the Partially Cooperative RBC is degraded, the achievable rate region is shown to be tight and provides the capacity region. Gaussian Partially Cooperative RBCs and Partially Cooperative RBCs with feedback are further studied. In the second channel model being studied in the paper, the Fully Cooperative Relay Broadcast Channel, both users can act as relay nodes and transmit to each other through relay links. This is a more general model than the Partially Cooperative RBC. All the results for Partially Cooperative RBCs are correspondingly generalized to the Fully Cooperative RBCs. It is further shown that the AWGN Fully Cooperative RBC has a larger achievable rate region than the AWGN Partially Cooperative RBC. The results illustrate that relaying and user cooperation are powerful techniques in improving the capacity of broadcast channels.
|
1603.02814
|
Chunhua Shen
|
Qi Wu, Chunhua Shen, Anton van den Hengel, Peng Wang, Anthony Dick
|
Image Captioning and Visual Question Answering Based on Attributes and
External Knowledge
|
14 pages. arXiv admin note: text overlap with arXiv:1511.06973
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Much recent progress in Vision-to-Language problems has been achieved through
a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural
Networks (RNNs). This approach does not explicitly represent high-level
semantic concepts, but rather seeks to progress directly from image features to
text. In this paper we first propose a method of incorporating high-level
concepts into the successful CNN-RNN approach, and show that it achieves a
significant improvement on the state-of-the-art in both image captioning and
visual question answering. We further show that the same mechanism can be used
to incorporate external knowledge, which is critically important for answering
high level visual questions. Specifically, we design a visual question
answering model that combines an internal representation of the content of an
image with information extracted from a general knowledge base to answer a
broad range of image-based questions. It particularly allows questions to be
asked about the contents of an image, even when the image itself does not
contain a complete answer. Our final model achieves the best reported results
on both image captioning and visual question answering on several benchmark
datasets.
|
[
{
"created": "Wed, 9 Mar 2016 08:56:45 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Dec 2016 11:44:34 GMT",
"version": "v2"
}
] |
2016-12-19
|
[
[
"Wu",
"Qi",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Wang",
"Peng",
""
],
[
"Dick",
"Anthony",
""
]
] |
Much recent progress in Vision-to-Language problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we first propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We further show that the same mechanism can be used to incorporate external knowledge, which is critically important for answering high level visual questions. Specifically, we design a visual question answering model that combines an internal representation of the content of an image with information extracted from a general knowledge base to answer a broad range of image-based questions. It particularly allows questions to be asked about the contents of an image, even when the image itself does not contain a complete answer. Our final model achieves the best reported results on both image captioning and visual question answering on several benchmark datasets.
|
2405.08852
|
Hao Wang
|
Hao Wang and Nao Li
|
A Click-Through Rate Prediction Method Based on Cross-Importance of
Multi-Order Features
| null | null | null | null |
cs.LG cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most current click-through rate prediction (CTR) models create explicit or
implicit high-order feature crosses through the Hadamard product or inner
product, with little attention to the importance of feature crossing; the few
models that address it are either limited to second-order explicit feature
crossing, model high-order crossing only implicitly, or can learn the
importance of high-order explicit feature crossing but fail to provide good
interpretability for the model. This
paper proposes a new model, FiiNet (Multiple Order Feature Interaction
Importance Neural Networks). The model first uses the selective kernel network
(SKNet) to explicitly construct multi-order feature crosses. It dynamically
learns the importance of feature interaction combinations in a fine-grained
manner, increasing the attention weight of important feature cross combinations
and reducing the weight of featureless crosses. To verify that the FiiNet model
can dynamically learn the importance of feature interaction combinations in a
fine-grained manner and improve the model's recommendation performance and
interpretability, this paper compares it with many click-through rate
prediction models on two real datasets, proving that the FiiNet model
incorporating the selective kernel network can effectively improve the
recommendation effect and provide better interpretability. FiiNet model
implementations are available in PyTorch.
|
[
{
"created": "Tue, 14 May 2024 16:05:57 GMT",
"version": "v1"
}
] |
2024-05-16
|
[
[
"Wang",
"Hao",
""
],
[
"Li",
"Nao",
""
]
] |
Most current click-through rate prediction (CTR) models create explicit or implicit high-order feature crosses through the Hadamard product or inner product, with little attention to the importance of feature crossing; the few models that address it are either limited to second-order explicit feature crossing, model high-order crossing only implicitly, or can learn the importance of high-order explicit feature crossing but fail to provide good interpretability for the model. This paper proposes a new model, FiiNet (Multiple Order Feature Interaction Importance Neural Networks). The model first uses the selective kernel network (SKNet) to explicitly construct multi-order feature crosses. It dynamically learns the importance of feature interaction combinations in a fine-grained manner, increasing the attention weight of important feature cross combinations and reducing the weight of featureless crosses. To verify that the FiiNet model can dynamically learn the importance of feature interaction combinations in a fine-grained manner and improve the model's recommendation performance and interpretability, this paper compares it with many click-through rate prediction models on two real datasets, proving that the FiiNet model incorporating the selective kernel network can effectively improve the recommendation effect and provide better interpretability. FiiNet model implementations are available in PyTorch.
|
0905.0315
|
Lahatra Rakotondrainibe
|
Lahatra Rakotondrainibe (IETR), Yvan Kokar (IETR), Gheorghe Zaharia
(IETR), Gha\"is El Zein (IETR)
|
Millimeter-Wave System for High Data Rate Indoor Communications
|
5 pages
|
ISSCS 2009, Iasi : Roumanie (2009)
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the realization of a wireless Gigabit Ethernet
communication system operating in the 60 GHz band. The system architecture uses
a single-carrier modulation. A differentially encoded binary phase-shift keying
modulation and a differential demodulation scheme are adopted for the
intermediate frequency blocks. The baseband blocks use Reed-Solomon RS (255,
239) coding and decoding for channel forward error correction (FEC). First
results of bit error rate (BER) measurements at 875 Mbps, without channel
coding, are presented for different antennas.
|
[
{
"created": "Mon, 4 May 2009 07:20:31 GMT",
"version": "v1"
}
] |
2010-11-10
|
[
[
"Rakotondrainibe",
"Lahatra",
"",
"IETR"
],
[
"Kokar",
"Yvan",
"",
"IETR"
],
[
"Zaharia",
"Gheorghe",
"",
"IETR"
],
[
"Zein",
"Ghaïs El",
"",
"IETR"
]
] |
This paper presents the realization of a wireless Gigabit Ethernet communication system operating in the 60 GHz band. The system architecture uses a single-carrier modulation. A differentially encoded binary phase-shift keying modulation and a differential demodulation scheme are adopted for the intermediate frequency blocks. The baseband blocks use Reed-Solomon RS (255, 239) coding and decoding for channel forward error correction (FEC). First results of bit error rate (BER) measurements at 875 Mbps, without channel coding, are presented for different antennas.
|
2205.04410
|
Sayan Biswas
|
Sayan Biswas, Kangsoo Jung, Catuscia Palamidessi
|
Tight Differential Privacy Blanket for Shuffle Model
|
Extended Abstract
| null |
10.1049/icp.2022.2041
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
With the recent bloom of focus on the digital economy, the importance of
personal data has seen a massive surge of late. Keeping pace with this trend,
the data market model is starting to emerge as a process for obtaining
high-quality personal information in exchange for incentives. To have a formal
guarantee to
protect the privacy of the sensitive data involved in the digital economy,
\emph{differential privacy (DP)} is the go-to technique, which has gained a lot
of attention by the community recently. However, it is essential to optimize
the privacy-utility trade-off by ensuring the highest level of privacy
protection while preserving the utility of the data. In this paper,
we theoretically derive sufficient and necessary conditions to have tight
$(\epsilon,\,\delta)$-DP blankets for the shuffle model, which, to the best of
our knowledge, have not been proven before, and, thus, characterize the best
possible DP protection for shuffle models, which can be implemented in data
markets to ensure privacy-preserving trading in the digital economy.
|
[
{
"created": "Mon, 9 May 2022 16:35:54 GMT",
"version": "v1"
}
] |
2022-11-11
|
[
[
"Biswas",
"Sayan",
""
],
[
"Jung",
"Kangsoo",
""
],
[
"Palamidessi",
"Catuscia",
""
]
] |
With the recent bloom of focus on the digital economy, the importance of personal data has seen a massive surge of late. Keeping pace with this trend, the data market model is starting to emerge as a process for obtaining high-quality personal information in exchange for incentives. To have a formal guarantee to protect the privacy of the sensitive data involved in the digital economy, \emph{differential privacy (DP)} is the go-to technique, which has gained a lot of attention by the community recently. However, it is essential to optimize the privacy-utility trade-off by ensuring the highest level of privacy protection while preserving the utility of the data. In this paper, we theoretically derive sufficient and necessary conditions to have tight $(\epsilon,\,\delta)$-DP blankets for the shuffle model, which, to the best of our knowledge, have not been proven before, and, thus, characterize the best possible DP protection for shuffle models, which can be implemented in data markets to ensure privacy-preserving trading in the digital economy.
|
2206.14719
|
Zifeng Wang
|
Zifeng Wang and Jimeng Sun
|
Trial2Vec: Zero-Shot Clinical Trial Document Similarity Search using
Self-Supervision
|
Findings of EMNLP 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical trials are essential for drug development but are extremely
expensive and time-consuming to conduct. It is beneficial to study similar
historical trials when designing a clinical trial. However, lengthy trial
documents and lack of labeled data make trial similarity search difficult. We
propose a zero-shot clinical trial retrieval method, Trial2Vec, which learns
through self-supervision without annotating similar clinical trials.
Specifically, the meta-structure of trial documents (e.g., title, eligibility
criteria, target disease) along with clinical knowledge (e.g., UMLS knowledge
base https://www.nlm.nih.gov/research/umls/index.html) are leveraged to
automatically generate contrastive samples. Besides, Trial2Vec encodes trial
documents considering meta-structure thus producing compact embeddings
aggregating multi-aspect information from the whole document. We show that our
method yields medically interpretable embeddings by visualization and it gets a
15% average improvement over the best baselines on precision/recall for trial
retrieval, which is evaluated on our labeled 1600 trial pairs. In addition, we
prove that the pre-trained embeddings benefit the downstream trial outcome
prediction task over 240k trials. Software is available at
https://github.com/RyanWangZf/Trial2Vec.
|
[
{
"created": "Wed, 29 Jun 2022 15:37:11 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Oct 2022 19:43:16 GMT",
"version": "v2"
}
] |
2022-10-11
|
[
[
"Wang",
"Zifeng",
""
],
[
"Sun",
"Jimeng",
""
]
] |
Clinical trials are essential for drug development but are extremely expensive and time-consuming to conduct. It is beneficial to study similar historical trials when designing a clinical trial. However, lengthy trial documents and lack of labeled data make trial similarity search difficult. We propose a zero-shot clinical trial retrieval method, Trial2Vec, which learns through self-supervision without annotating similar clinical trials. Specifically, the meta-structure of trial documents (e.g., title, eligibility criteria, target disease) along with clinical knowledge (e.g., UMLS knowledge base https://www.nlm.nih.gov/research/umls/index.html) are leveraged to automatically generate contrastive samples. Besides, Trial2Vec encodes trial documents considering meta-structure thus producing compact embeddings aggregating multi-aspect information from the whole document. We show that our method yields medically interpretable embeddings by visualization and it gets a 15% average improvement over the best baselines on precision/recall for trial retrieval, which is evaluated on our labeled 1600 trial pairs. In addition, we prove that the pre-trained embeddings benefit the downstream trial outcome prediction task over 240k trials. Software is available at https://github.com/RyanWangZf/Trial2Vec.
|
1609.06204
|
Alessio Palmero Aprosio
|
Alessio Palmero Aprosio and Giovanni Moretti
|
Italy goes to Stanford: a collection of CoreNLP modules for Italian
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present Tint, an easy-to-use set of fast, accurate and
extendable Natural Language Processing modules for Italian. It is based on
Stanford CoreNLP and is freely available as standalone software or as a
library that can be integrated into an existing project.
|
[
{
"created": "Tue, 20 Sep 2016 14:53:05 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Apr 2017 08:33:33 GMT",
"version": "v2"
}
] |
2017-04-14
|
[
[
"Aprosio",
"Alessio Palmero",
""
],
[
"Moretti",
"Giovanni",
""
]
] |
In this paper we present Tint, an easy-to-use set of fast, accurate and extendable Natural Language Processing modules for Italian. It is based on Stanford CoreNLP and is freely available as standalone software or as a library that can be integrated into an existing project.
|
1805.02008
|
Xu Guo
|
Chang Liu, Yichao Zhu, Zhi Sun, Dingding Li, Zongliang Du, Weisheng
Zhang, Xu Guo
|
An efficient Moving Morphable Component (MMC)-based approach for
multi-resolution topology optimization
| null |
Structural and Multidisciplinary Optimization (2018) 58: 2455
|
10.1007/s00158-018-2114-0
| null |
cs.CE cond-mat.mtrl-sci
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present work, a highly efficient Moving Morphable Component (MMC)
based approach for multi-resolution topology optimization is proposed. In this
approach, high-resolution optimization results can be obtained with a much
smaller number of degrees of freedom (DOFs) and design variables, since the finite
element analysis model and the design optimization model are totally decoupled
in the MMC-based problem formulation. This is achieved by introducing
super-elements for structural response analysis and adopting a domain
decomposition strategy to preserve the topology complexity of optimized
structures. Both two- and three-dimensional numerical results demonstrate that
substantial computational efforts can be saved with the use of the proposed
approach.
|
[
{
"created": "Sat, 5 May 2018 05:38:53 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Jul 2018 04:56:17 GMT",
"version": "v2"
}
] |
2018-12-10
|
[
[
"Liu",
"Chang",
""
],
[
"Zhu",
"Yichao",
""
],
[
"Sun",
"Zhi",
""
],
[
"Li",
"Dingding",
""
],
[
"Du",
"Zongliang",
""
],
[
"Zhang",
"Weisheng",
""
],
[
"Guo",
"Xu",
""
]
] |
In the present work, a highly efficient Moving Morphable Component (MMC) based approach for multi-resolution topology optimization is proposed. In this approach, high-resolution optimization results can be obtained with a much smaller number of degrees of freedom (DOFs) and design variables, since the finite element analysis model and the design optimization model are totally decoupled in the MMC-based problem formulation. This is achieved by introducing super-elements for structural response analysis and adopting a domain decomposition strategy to preserve the topology complexity of optimized structures. Both two- and three-dimensional numerical results demonstrate that substantial computational efforts can be saved with the use of the proposed approach.
|
1904.02141
|
Yuying Zhu
|
Yuying Zhu, Guoxin Wang, B\"orje F. Karlsson
|
CAN-NER: Convolutional Attention Network for Chinese Named Entity
Recognition
|
This paper is accepted by NAACL-HLT 2019. The code is available at
https://github.com/microsoft/vert-papers/tree/master/papers/CAN-NER
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Named entity recognition (NER) in Chinese is essential but difficult because
of the lack of natural delimiters. Therefore, Chinese Word Segmentation (CWS)
is usually considered as the first step for Chinese NER. However, models based
on word-level embeddings and lexicon features often suffer from segmentation
errors and out-of-vocabulary (OOV) words. In this paper, we investigate a
Convolutional Attention Network called CAN for Chinese NER, which consists of a
character-based convolutional neural network (CNN) with local-attention layer
and a gated recurrent unit (GRU) with global self-attention layer to capture
the information from adjacent characters and sentence contexts. Also, our
model is more practical than others because it does not depend on external
resources such as lexicons and uses only small character embeddings.
Extensive experimental results show that our approach outperforms
state-of-the-art methods without word embedding and external lexicon resources
on different domain datasets including Weibo, MSRA and Chinese Resume NER
dataset.
|
[
{
"created": "Wed, 3 Apr 2019 17:56:38 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Apr 2019 08:10:55 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Jul 2020 14:10:33 GMT",
"version": "v3"
}
] |
2020-07-16
|
[
[
"Zhu",
"Yuying",
""
],
[
"Wang",
"Guoxin",
""
],
[
"Karlsson",
"Börje F.",
""
]
] |
Named entity recognition (NER) in Chinese is essential but difficult because of the lack of natural delimiters. Therefore, Chinese Word Segmentation (CWS) is usually considered as the first step for Chinese NER. However, models based on word-level embeddings and lexicon features often suffer from segmentation errors and out-of-vocabulary (OOV) words. In this paper, we investigate a Convolutional Attention Network called CAN for Chinese NER, which consists of a character-based convolutional neural network (CNN) with local-attention layer and a gated recurrent unit (GRU) with global self-attention layer to capture the information from adjacent characters and sentence contexts. Also, our model is more practical than others because it does not depend on external resources such as lexicons and uses only small character embeddings. Extensive experimental results show that our approach outperforms state-of-the-art methods without word embedding and external lexicon resources on different domain datasets including Weibo, MSRA and Chinese Resume NER dataset.
|
1309.5316
|
Brigitte Charnomordic
|
Aur\'elie Th\'ebaut (MISTEA), Thibault Scholash, Brigitte Charnomordic
(MISTEA), Nadine Hilgert (MISTEA)
|
A modeling approach to design a software sensor and analyze agronomical
features - Application to sap flow and grape quality relationship
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work proposes a framework using temporal data and domain knowledge in
order to analyze complex agronomical features. The expertise is first
formalized in an ontology, in the form of concepts and relationships between
them, and then used in conjunction with raw data and mathematical models to
design a software sensor. Next the software sensor outputs are put in relation
to product quality, assessed by quantitative measurements. This requires the
use of advanced data analysis methods, such as functional regression. The
methodology is applied to a case study involving an experimental design in
French vineyards. The temporal data consist of sap flow measurements, and the
goal is to explain fruit quality (sugar concentration and weight), using vine's
water courses through the various vine phenological stages. The results are
discussed, as well as the method genericity and robustness.
|
[
{
"created": "Fri, 20 Sep 2013 16:41:43 GMT",
"version": "v1"
}
] |
2013-09-23
|
[
[
"Thébaut",
"Aurélie",
"",
"MISTEA"
],
[
"Scholash",
"Thibault",
"",
"MISTEA"
],
[
"Charnomordic",
"Brigitte",
"",
"MISTEA"
],
[
"Hilgert",
"Nadine",
"",
"MISTEA"
]
] |
This work proposes a framework using temporal data and domain knowledge in order to analyze complex agronomical features. The expertise is first formalized in an ontology, in the form of concepts and relationships between them, and then used in conjunction with raw data and mathematical models to design a software sensor. Next the software sensor outputs are put in relation to product quality, assessed by quantitative measurements. This requires the use of advanced data analysis methods, such as functional regression. The methodology is applied to a case study involving an experimental design in French vineyards. The temporal data consist of sap flow measurements, and the goal is to explain fruit quality (sugar concentration and weight), using vine's water courses through the various vine phenological stages. The results are discussed, as well as the method genericity and robustness.
|
1909.13708
|
Rog\'erio De Lemos
|
Lionel Montrieux, Rogerio de Lemos, Chris Bailey
|
Engineering Self-adaptive Authorisation Infrastructures
|
A shorter version of this paper appeared in: Montrieux L., de
Lemos R., Bailey C. (2019) Challenges in Engineering Self-Adaptive
Authorisation Infrastructures. In: Yu Y. et al. (eds) Engineering Adaptive
Software Systems. Springer, Singapore
| null |
10.1007/978-981-13-2185-6_3
| null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As organisations expand and interconnect, authorisation infrastructures
become increasingly difficult to manage. Several solutions have been proposed,
including self-adaptive authorisation, where the access control policies are
dynamically adapted at run-time to respond to misuse and malicious behaviour.
The ultimate goal of self-adaptive authorisation is to reduce human
intervention, make authorisation infrastructures more responsive to malicious
behaviour, and manage access control in a more cost effective way. In this
paper, we scope and define the emerging area of self-adaptive authorisation by
describing some of its developments, trends and challenges. For that, we start
by identifying key concepts related to access control and authorisation
infrastructures, and provide a brief introduction to self-adaptive software
systems, which provides the foundation for investigating how self-adaptation
can enable the enforcement of authorisation policies. The outcome of this study
is the identification of several technical challenges related to self-adaptive
authorisation, which are classified according to the different stages of a
feedback control loop.
|
[
{
"created": "Mon, 30 Sep 2019 13:59:09 GMT",
"version": "v1"
}
] |
2019-10-01
|
[
[
"Montrieux",
"Lionel",
""
],
[
"de Lemos",
"Rogerio",
""
],
[
"Bailey",
"Chris",
""
]
] |
As organisations expand and interconnect, authorisation infrastructures become increasingly difficult to manage. Several solutions have been proposed, including self-adaptive authorisation, where the access control policies are dynamically adapted at run-time to respond to misuse and malicious behaviour. The ultimate goal of self-adaptive authorisation is to reduce human intervention, make authorisation infrastructures more responsive to malicious behaviour, and manage access control in a more cost effective way. In this paper, we scope and define the emerging area of self-adaptive authorisation by describing some of its developments, trends and challenges. For that, we start by identifying key concepts related to access control and authorisation infrastructures, and provide a brief introduction to self-adaptive software systems, which provides the foundation for investigating how self-adaptation can enable the enforcement of authorisation policies. The outcome of this study is the identification of several technical challenges related to self-adaptive authorisation, which are classified according to the different stages of a feedback control loop.
|
2304.04005
|
Seyed Mohammad Hosien Abedy Nejad
|
Seyed Mohammad Hossein Abedy Nejad, Mohammad Amin Behzadi, Abdolrahim
Taheri
|
A new transformation for embedded convolutional neural network approach
toward real-time servo motor overload fault-detection
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Overloading in DC servo motors is a major concern in industry, as many
companies face the problem of finding expert operators, and human monitoring
may not be an effective solution. Therefore, this paper proposes an embedded
artificial intelligence (AI) approach using a Convolutional Neural Network
(CNN) with a new transformation to extract faults from real-time input
signals without human interference. Our main purpose is to extract as many
features as possible from the input signal to achieve a relaxed dataset that
results in an effective but compact network, providing real-time fault
detection even on a low-memory microcontroller. Besides, a fault-detection
method using a synchronous dual-motor system is also proposed to take action
in faulty events. To fulfill this intention, a one-dimensional input signal
from the output current of each DC servo motor is monitored and transformed
into a 3D stack of data, and then the CNN is implemented on the processor to
detect any fault corresponding to overloading. Finally, the experimental
setup achieves 99.9997% accuracy during testing for a model with nearly 8000
parameters. In addition, the proposed dual-motor system could achieve
overload reduction and provide a fault-tolerant system, and it is shown that
this system also consumes less energy.
|
[
{
"created": "Sat, 8 Apr 2023 13:36:33 GMT",
"version": "v1"
}
] |
2023-04-11
|
[
[
"Nejad",
"Seyed Mohammad Hossein Abedy",
""
],
[
"Behzadi",
"Mohammad Amin",
""
],
[
"Taheri",
"Abdolrahim",
""
]
] |
Overloading in DC servo motors is a major concern in industry, as many companies face the problem of finding expert operators, and human monitoring may not be an effective solution. Therefore, this paper proposes an embedded artificial intelligence (AI) approach using a Convolutional Neural Network (CNN) with a new transformation to extract faults from real-time input signals without human interference. Our main purpose is to extract as many features as possible from the input signal to achieve a relaxed dataset that results in an effective but compact network, providing real-time fault detection even on a low-memory microcontroller. Besides, a fault-detection method using a synchronous dual-motor system is also proposed to take action in faulty events. To fulfill this intention, a one-dimensional input signal from the output current of each DC servo motor is monitored and transformed into a 3D stack of data, and then the CNN is implemented on the processor to detect any fault corresponding to overloading. Finally, the experimental setup achieves 99.9997% accuracy during testing for a model with nearly 8000 parameters. In addition, the proposed dual-motor system could achieve overload reduction and provide a fault-tolerant system, and it is shown that this system also consumes less energy.
|
2005.07959
|
Benedek Rozemberczki
|
Benedek Rozemberczki and Rik Sarkar
|
Characteristic Functions on Graphs: Birds of a Feather, from Statistical
Descriptors to Parametric Models
|
Source code is available at:
https://github.com/benedekrozemberczki/FEATHER
|
CIKM 2020
| null | null |
cs.LG cs.DM cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a flexible notion of characteristic functions
defined on graph vertices to describe the distribution of vertex features at
multiple scales. We introduce FEATHER, a computationally efficient algorithm to
calculate a specific variant of these characteristic functions where the
probability weights of the characteristic function are defined as the
transition probabilities of random walks. We argue that features extracted by
this procedure are useful for node level machine learning tasks. We discuss the
pooling of these node representations, resulting in compact descriptors of
graphs that can serve as features for graph classification algorithms. We
analytically prove that FEATHER describes isomorphic graphs with the same
representation and exhibits robustness to data corruption. Using the node
feature characteristic functions we define parametric models where evaluation
points of the functions are learned parameters of supervised classifiers.
Experiments on real world large datasets show that our proposed algorithm
creates high quality representations, performs transfer learning efficiently,
exhibits robustness to hyperparameter changes, and scales linearly with the
input size.
|
[
{
"created": "Sat, 16 May 2020 11:47:05 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Aug 2020 16:21:09 GMT",
"version": "v2"
}
] |
2020-08-18
|
[
[
"Rozemberczki",
"Benedek",
""
],
[
"Sarkar",
"Rik",
""
]
] |
In this paper, we propose a flexible notion of characteristic functions defined on graph vertices to describe the distribution of vertex features at multiple scales. We introduce FEATHER, a computationally efficient algorithm to calculate a specific variant of these characteristic functions where the probability weights of the characteristic function are defined as the transition probabilities of random walks. We argue that features extracted by this procedure are useful for node level machine learning tasks. We discuss the pooling of these node representations, resulting in compact descriptors of graphs that can serve as features for graph classification algorithms. We analytically prove that FEATHER describes isomorphic graphs with the same representation and exhibits robustness to data corruption. Using the node feature characteristic functions we define parametric models where evaluation points of the functions are learned parameters of supervised classifiers. Experiments on real world large datasets show that our proposed algorithm creates high quality representations, performs transfer learning efficiently, exhibits robustness to hyperparameter changes, and scales linearly with the input size.
|
1809.10636
|
Anoop Toffy
|
Chae Young Lee, Anoop Toffy, Gue Jun Jung, Woo-Jin Han
|
Conditional WaveGAN
|
Preprint
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Generative models have been successfully used for image synthesis in recent
years, but when it comes to other modalities such as audio and text, little
progress has been made. Recent works focus on generating audio from a
generative model in an unsupervised setting. We explore the possibility of
using generative models conditioned on class labels. Concatenation based
conditioning and conditional scaling were explored in this work with various
hyper-parameter tuning methods. In this paper we introduce Conditional WaveGANs
(cWaveGAN). Find our implementation at https://github.com/acheketa/cwavegan
|
[
{
"created": "Thu, 27 Sep 2018 16:56:23 GMT",
"version": "v1"
}
] |
2018-09-30
|
[
[
"Lee",
"Chae Young",
""
],
[
"Toffy",
"Anoop",
""
],
[
"Jung",
"Gue Jun",
""
],
[
"Han",
"Woo-Jin",
""
]
] |
Generative models have been successfully used for image synthesis in recent years, but when it comes to other modalities such as audio and text, little progress has been made. Recent works focus on generating audio from a generative model in an unsupervised setting. We explore the possibility of using generative models conditioned on class labels. Concatenation based conditioning and conditional scaling were explored in this work with various hyper-parameter tuning methods. In this paper we introduce Conditional WaveGANs (cWaveGAN). Find our implementation at https://github.com/acheketa/cwavegan
|
1501.01829
|
Or Ordentlich
|
Or Ordentlich and Uri Erez
|
Performance Analysis and Optimal Filter Design for Sigma-Delta
Modulation via Duality with DPCM
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sampling above the Nyquist rate is at the heart of sigma-delta modulation,
where the increase in sampling rate is translated to a reduction in the overall
(mean-squared-error) reconstruction distortion. This is attained by using a
feedback filter at the encoder, in conjunction with a low-pass filter at the
decoder. The goal of this work is to characterize the optimal trade-off between
the per-sample quantization rate and the resulting mean-squared-error
distortion, under various restrictions on the feedback filter. To this end, we
establish a duality relation between the performance of sigma-delta modulation,
and that of differential pulse-code modulation when applied to (discrete-time)
band-limited inputs. As the optimal trade-off for the latter scheme is fully
understood, the full characterization for sigma-delta modulation, as well as
the optimal feedback filters, immediately follow.
|
[
{
"created": "Thu, 8 Jan 2015 13:14:40 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jun 2015 17:10:44 GMT",
"version": "v2"
}
] |
2015-06-10
|
[
[
"Ordentlich",
"Or",
""
],
[
"Erez",
"Uri",
""
]
] |
Sampling above the Nyquist rate is at the heart of sigma-delta modulation, where the increase in sampling rate is translated to a reduction in the overall (mean-squared-error) reconstruction distortion. This is attained by using a feedback filter at the encoder, in conjunction with a low-pass filter at the decoder. The goal of this work is to characterize the optimal trade-off between the per-sample quantization rate and the resulting mean-squared-error distortion, under various restrictions on the feedback filter. To this end, we establish a duality relation between the performance of sigma-delta modulation, and that of differential pulse-code modulation when applied to (discrete-time) band-limited inputs. As the optimal trade-off for the latter scheme is fully understood, the full characterization for sigma-delta modulation, as well as the optimal feedback filters, immediately follow.
|
2009.14737
|
Keyu Tian
|
Keyu Tian, Chen Lin, Ming Sun, Luping Zhou, Junjie Yan, Wanli Ouyang
|
Improving Auto-Augment via Augmentation-Wise Weight Sharing
|
Accepted to NeurIPS 2020 (Poster)
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent progress on automatically searching augmentation policies has
boosted the performance substantially for various tasks. A key component of
automatic augmentation search is the evaluation process for a particular
augmentation policy, which is utilized to return a reward and usually runs
thousands of times. A plain evaluation process, which includes full model
training and validation, would be time-consuming. To achieve efficiency, many
choose to sacrifice evaluation reliability for speed. In this paper, we dive
into the dynamics of augmented training of the model. This inspires us to
design a powerful and efficient proxy task based on the Augmentation-Wise
Weight Sharing (AWS) to form a fast yet accurate evaluation process in an
elegant way. Comprehensive analysis verifies the superiority of this approach
in terms of effectiveness and efficiency. The augmentation policies found by
our method achieve superior accuracies compared with existing auto-augmentation
search methods. On CIFAR-10, we achieve a top-1 error rate of 1.24%, which is
currently the best performing single model without extra training data. On
ImageNet, we get a top-1 error rate of 20.36% for ResNet-50, which leads to
3.34% absolute error rate reduction over the baseline augmentation.
|
[
{
"created": "Wed, 30 Sep 2020 15:23:12 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Oct 2020 15:12:47 GMT",
"version": "v2"
}
] |
2020-10-23
|
[
[
"Tian",
"Keyu",
""
],
[
"Lin",
"Chen",
""
],
[
"Sun",
"Ming",
""
],
[
"Zhou",
"Luping",
""
],
[
"Yan",
"Junjie",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
The recent progress on automatically searching augmentation policies has boosted the performance substantially for various tasks. A key component of automatic augmentation search is the evaluation process for a particular augmentation policy, which is utilized to return a reward and usually runs thousands of times. A plain evaluation process, which includes full model training and validation, would be time-consuming. To achieve efficiency, many choose to sacrifice evaluation reliability for speed. In this paper, we dive into the dynamics of augmented training of the model. This inspires us to design a powerful and efficient proxy task based on the Augmentation-Wise Weight Sharing (AWS) to form a fast yet accurate evaluation process in an elegant way. Comprehensive analysis verifies the superiority of this approach in terms of effectiveness and efficiency. The augmentation policies found by our method achieve superior accuracies compared with existing auto-augmentation search methods. On CIFAR-10, we achieve a top-1 error rate of 1.24%, which is currently the best performing single model without extra training data. On ImageNet, we get a top-1 error rate of 20.36% for ResNet-50, which leads to 3.34% absolute error rate reduction over the baseline augmentation.
|
2104.03071
|
Vijit Malik
|
Aditya Jindal, Ankur Gupta, Jaya Srivastava, Preeti Menghwani, Vijit
Malik, Vishesh Kaushik, Ashutosh Modi
|
BreakingBERT@IITK at SemEval-2021 Task 9 : Statement Verification and
Evidence Finding with Tables
|
Accepted at SemEval 2021 Task 9, 11 Pages (8 Pages main content+ 1
pages for references + 2 Pages Appendix)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recently, there has been an interest in factual verification and prediction
over structured data like tables and graphs. To circumvent any false news
incident, it is necessary to not only model and predict over structured data
efficiently but also to explain those predictions. In this paper, as part of
the SemEval-2021 Task 9, we tackle the problem of fact verification and
evidence finding over tabular data. There are two subtasks. Given a table and a
statement/fact, subtask A determines whether the statement is inferred from the
tabular data, and subtask B determines which cells in the table provide
evidence for the former subtask. We make a comparison of the baselines and
state-of-the-art approaches over the given SemTabFact dataset. We also propose
a novel approach CellBERT to solve evidence finding as a form of the Natural
Language Inference task. We obtain a 3-way F1 score of 0.69 on subtask A and an
F1 score of 0.65 on subtask B.
|
[
{
"created": "Wed, 7 Apr 2021 11:41:07 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Apr 2021 10:08:47 GMT",
"version": "v2"
}
] |
2021-04-13
|
[
[
"Jindal",
"Aditya",
""
],
[
"Gupta",
"Ankur",
""
],
[
"Srivastava",
"Jaya",
""
],
[
"Menghwani",
"Preeti",
""
],
[
"Malik",
"Vijit",
""
],
[
"Kaushik",
"Vishesh",
""
],
[
"Modi",
"Ashutosh",
""
]
] |
Recently, there has been an interest in factual verification and prediction over structured data like tables and graphs. To circumvent any false news incident, it is necessary to not only model and predict over structured data efficiently but also to explain those predictions. In this paper, as part of the SemEval-2021 Task 9, we tackle the problem of fact verification and evidence finding over tabular data. There are two subtasks. Given a table and a statement/fact, subtask A determines whether the statement is inferred from the tabular data, and subtask B determines which cells in the table provide evidence for the former subtask. We make a comparison of the baselines and state-of-the-art approaches over the given SemTabFact dataset. We also propose a novel approach CellBERT to solve evidence finding as a form of the Natural Language Inference task. We obtain a 3-way F1 score of 0.69 on subtask A and an F1 score of 0.65 on subtask B.
|
2408.07614
|
Sergei Vassilvitskii
|
Kareem Amin, Alex Kulesza, Sergei Vassilvitskii
|
Practical Considerations for Differential Privacy
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Differential privacy is the gold standard for statistical data release. Used
by governments, companies, and academics, its mathematically rigorous
guarantees and worst-case assumptions on the strength and knowledge of
attackers make it a robust and compelling framework for reasoning about
privacy. However, even with landmark successes, differential privacy has not
achieved widespread adoption in everyday data use and data protection. In this
work we examine some of the practical obstacles that stand in the way.
|
[
{
"created": "Wed, 14 Aug 2024 15:28:28 GMT",
"version": "v1"
}
] |
2024-08-15
|
[
[
"Amin",
"Kareem",
""
],
[
"Kulesza",
"Alex",
""
],
[
"Vassilvitskii",
"Sergei",
""
]
] |
Differential privacy is the gold standard for statistical data release. Used by governments, companies, and academics, its mathematically rigorous guarantees and worst-case assumptions on the strength and knowledge of attackers make it a robust and compelling framework for reasoning about privacy. However, even with landmark successes, differential privacy has not achieved widespread adoption in everyday data use and data protection. In this work we examine some of the practical obstacles that stand in the way.
|
2311.18402
|
Xinwei Fu
|
Dan Song, Xinwei Fu, Weizhi Nie, Wenhui Li, Lanjun Wang, You Yang,
Anan Liu
|
MV-CLIP: Multi-View CLIP for Zero-shot 3D Shape Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale pre-trained models have demonstrated impressive performance in
vision and language tasks within open-world scenarios. Due to the lack of
comparable pre-trained models for 3D shapes, recent methods utilize
language-image pre-training to realize zero-shot 3D shape recognition. However,
due to the modality gap, pretrained language-image models are not confident
enough in the generalization to 3D shape recognition. Consequently, this paper
aims to improve the confidence with view selection and hierarchical prompts.
Leveraging the CLIP model as an example, we employ view selection on the vision
side by identifying views with high prediction confidence from multiple
rendered views of a 3D shape. On the textual side, the strategy of hierarchical
prompts is proposed for the first time. The first layer prompts several
classification candidates with traditional class-level descriptions, while the
second layer refines the prediction based on function-level descriptions or
further distinctions between the candidates. Remarkably, without the need for
additional training, our proposed method achieves impressive zero-shot 3D
classification accuracies of 84.44%, 91.51%, and 66.17% on ModelNet40,
ModelNet10, and ShapeNet Core55, respectively. Furthermore, we will make the
code publicly available to facilitate reproducibility and further research in
this area.
|
[
{
"created": "Thu, 30 Nov 2023 09:51:53 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Apr 2024 08:57:35 GMT",
"version": "v2"
}
] |
2024-04-18
|
[
[
"Song",
"Dan",
""
],
[
"Fu",
"Xinwei",
""
],
[
"Nie",
"Weizhi",
""
],
[
"Li",
"Wenhui",
""
],
[
"Wang",
"Lanjun",
""
],
[
"Yang",
"You",
""
],
[
"Liu",
"Anan",
""
]
] |
Large-scale pre-trained models have demonstrated impressive performance in vision and language tasks within open-world scenarios. Due to the lack of comparable pre-trained models for 3D shapes, recent methods utilize language-image pre-training to realize zero-shot 3D shape recognition. However, due to the modality gap, pretrained language-image models are not confident enough in the generalization to 3D shape recognition. Consequently, this paper aims to improve the confidence with view selection and hierarchical prompts. Leveraging the CLIP model as an example, we employ view selection on the vision side by identifying views with high prediction confidence from multiple rendered views of a 3D shape. On the textual side, the strategy of hierarchical prompts is proposed for the first time. The first layer prompts several classification candidates with traditional class-level descriptions, while the second layer refines the prediction based on function-level descriptions or further distinctions between the candidates. Remarkably, without the need for additional training, our proposed method achieves impressive zero-shot 3D classification accuracies of 84.44%, 91.51%, and 66.17% on ModelNet40, ModelNet10, and ShapeNet Core55, respectively. Furthermore, we will make the code publicly available to facilitate reproducibility and further research in this area.
|
1902.08915
|
Yiwei Zhang
|
Yiwei Zhang, Chunbiao Zhu, Ge Li, Yuan Zhao, Haifeng Shen
|
Bi-Skip: A Motion Deblurring Network Using Self-paced Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A fast and effective motion deblurring method has great application values in
real life. This work presents an innovative approach in which a self-paced
learning is combined with GAN to deblur image. First, We explain that a proper
generator can be used as deep priors and point out that the solution for
pixel-based loss is not same with the one for perception-based loss. By using
these ideas as starting points, a Bi-Skip network is proposed to improve the
generating ability and a bi-level loss is adopted to solve the problem that
common conditions are non-identical. Second, considering that the complex
motion blur will perturb the network in the training process, a self-paced
mechanism is adopted to enhance the robustness of the network. Through
extensive evaluations on both qualitative and quantitative criteria, it is
demonstrated that our approach has a competitive advantage over
state-of-the-art methods.
|
[
{
"created": "Sun, 24 Feb 2019 10:28:04 GMT",
"version": "v1"
}
] |
2019-02-26
|
[
[
"Zhang",
"Yiwei",
""
],
[
"Zhu",
"Chunbiao",
""
],
[
"Li",
"Ge",
""
],
[
"Zhao",
"Yuan",
""
],
[
"Shen",
"Haifeng",
""
]
] |
A fast and effective motion deblurring method has great application values in real life. This work presents an innovative approach in which a self-paced learning is combined with GAN to deblur image. First, We explain that a proper generator can be used as deep priors and point out that the solution for pixel-based loss is not same with the one for perception-based loss. By using these ideas as starting points, a Bi-Skip network is proposed to improve the generating ability and a bi-level loss is adopted to solve the problem that common conditions are non-identical. Second, considering that the complex motion blur will perturb the network in the training process, a self-paced mechanism is adopted to enhance the robustness of the network. Through extensive evaluations on both qualitative and quantitative criteria, it is demonstrated that our approach has a competitive advantage over state-of-the-art methods.
|
1801.04510
|
Jia Wu
|
Chenglong Dai, Jia Wu, Dechang Pi, Lin Cui
|
Brain EEG Time Series Selection: A Novel Graph-Based Approach for
Classification
|
9 pages, 5 figures, Accepted by SDM-2018
| null | null | null |
cs.LG q-bio.NC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brain Electroencephalography (EEG) classification is widely applied to
analyze cerebral diseases in recent years. Unfortunately, invalid/noisy EEGs
degrade the diagnosis performance and most previously developed methods ignore
the necessity of EEG selection for classification. To this end, this paper
proposes a novel maximum weight clique-based EEG selection approach, named
mwcEEGs, to map EEG selection to searching maximum similarity-weighted cliques
from an improved Fr\'{e}chet distance-weighted undirected EEG graph
simultaneously considering edge weights and vertex weights. Our mwcEEGs
improves the classification performance by selecting intra-clique pairwise
similar and inter-clique discriminative EEGs with similarity threshold
$\delta$. Experimental results demonstrate the algorithm effectiveness compared
with the state-of-the-art time series selection algorithms on real-world EEG
datasets.
|
[
{
"created": "Sun, 14 Jan 2018 04:51:22 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Feb 2018 06:19:21 GMT",
"version": "v2"
}
] |
2018-02-12
|
[
[
"Dai",
"Chenglong",
""
],
[
"Wu",
"Jia",
""
],
[
"Pi",
"Dechang",
""
],
[
"Cui",
"Lin",
""
]
] |
Brain Electroencephalography (EEG) classification is widely applied to analyze cerebral diseases in recent years. Unfortunately, invalid/noisy EEGs degrade the diagnosis performance and most previously developed methods ignore the necessity of EEG selection for classification. To this end, this paper proposes a novel maximum weight clique-based EEG selection approach, named mwcEEGs, to map EEG selection to searching maximum similarity-weighted cliques from an improved Fr\'{e}chet distance-weighted undirected EEG graph simultaneously considering edge weights and vertex weights. Our mwcEEGs improves the classification performance by selecting intra-clique pairwise similar and inter-clique discriminative EEGs with similarity threshold $\delta$. Experimental results demonstrate the algorithm effectiveness compared with the state-of-the-art time series selection algorithms on real-world EEG datasets.
|
1203.0439
|
Pierre de Leusse
|
Pierre de Leusse, Panos Periorellis, Theo Dimitrakos and Srijith K.
Nair
|
Self Managed Security Cell, a security model for the Internet of Things
and Services
| null |
The First International Conference on Advances in Future Internet,
AFIN 2009, IEEE Computer Society, June 18-23, 2009, Athens/Vouliagmeni,
Greece, Best paper award
|
10.1109/AFIN.2009.15
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet of Things and Services is a rapidly growing concept that
illustrates that the ever increasing amount of physical items of our daily life
which become addressable through a network could be made more easily manageable
and usable through the use of Services. This surge of exposed resources along
with the level of privacy and value of the information they hold, together with
the increase of their usage make for an augmentation in the number of the
security threats and violation attempts that existing security systems do not
appear robust enough to address. In this paper, the authors underline this
increase in risk and identify the requirements for resources to be more
resilient in this type of environment while keeping an important level of
flexibility. In addition, the authors propose an architectural model of Self
Managed Security Cell, which leverages on current knowledge in large scale
security systems, information management and autonomous systems.
|
[
{
"created": "Fri, 2 Mar 2012 12:17:20 GMT",
"version": "v1"
}
] |
2012-03-05
|
[
[
"de Leusse",
"Pierre",
""
],
[
"Periorellis",
"Panos",
""
],
[
"Dimitrakos",
"Theo",
""
],
[
"Nair",
"Srijith K.",
""
]
] |
The Internet of Things and Services is a rapidly growing concept that illustrates that the ever increasing amount of physical items of our daily life which become addressable through a network could be made more easily manageable and usable through the use of Services. This surge of exposed resources along with the level of privacy and value of the information they hold, together with the increase of their usage make for an augmentation in the number of the security threats and violation attempts that existing security systems do not appear robust enough to address. In this paper, the authors underline this increase in risk and identify the requirements for resources to be more resilient in this type of environment while keeping an important level of flexibility. In addition, the authors propose an architectural model of Self Managed Security Cell, which leverages on current knowledge in large scale security systems, information management and autonomous systems.
|
1912.03673
|
Matthias Rottmann
|
Matthias Rottmann, Kira Maag, Robin Chan, Fabian H\"uger, Peter
Schlicht, Hanno Gottschalk
|
Detection of False Positive and False Negative Samples in Semantic
Segmentation
| null | null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, deep learning methods have outperformed other methods in
image recognition. This has fostered imagination of potential application of
deep learning technology including safety relevant applications like the
interpretation of medical images or autonomous driving. The passage from
assistance of a human decision maker to ever more automated systems however
increases the need to properly handle the failure modes of deep learning
modules. In this contribution, we review a set of techniques for the
self-monitoring of machine-learning algorithms based on uncertainty
quantification. In particular, we apply this to the task of semantic
segmentation, where the machine learning algorithm decomposes an image
according to semantic categories. We discuss false positive and false negative
error modes at instance-level and review techniques for the detection of such
errors that have been recently proposed by the authors. We also give an outlook
on future research directions.
|
[
{
"created": "Sun, 8 Dec 2019 13:04:06 GMT",
"version": "v1"
}
] |
2019-12-10
|
[
[
"Rottmann",
"Matthias",
""
],
[
"Maag",
"Kira",
""
],
[
"Chan",
"Robin",
""
],
[
"Hüger",
"Fabian",
""
],
[
"Schlicht",
"Peter",
""
],
[
"Gottschalk",
"Hanno",
""
]
] |
In recent years, deep learning methods have outperformed other methods in image recognition. This has fostered imagination of potential application of deep learning technology including safety relevant applications like the interpretation of medical images or autonomous driving. The passage from assistance of a human decision maker to ever more automated systems however increases the need to properly handle the failure modes of deep learning modules. In this contribution, we review a set of techniques for the self-monitoring of machine-learning algorithms based on uncertainty quantification. In particular, we apply this to the task of semantic segmentation, where the machine learning algorithm decomposes an image according to semantic categories. We discuss false positive and false negative error modes at instance-level and review techniques for the detection of such errors that have been recently proposed by the authors. We also give an outlook on future research directions.
|
2103.17252
|
Giulia Dominijanni
|
Giulia Dominijanni, Solaiman Shokur, Gionata Salvietti, Sarah Buehler,
Erica Palmerini, Simone Rossi, Frederique De Vignemont, Andrea D'Avella,
Tamar R. Makin, Domenico Prattichizzo, Silvestro Micera
|
Enhancing human bodies with extra robotic arms and fingers: The Neural
Resource Allocation Problem
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The emergence of robot-based body augmentation promises exciting innovations
that will inform robotics, human-machine interaction, and wearable electronics.
Even though augmentative devices like extra robotic arms and fingers in many
ways build on restorative technologies, they introduce unique challenges for
bidirectional human-machine collaboration. Can humans adapt and learn to
operate a new limb collaboratively with their biological limbs without
sacrificing their physical abilities? To successfully achieve robotic body
augmentation, we need to ensure that by giving a person an additional
(artificial) limb, we are not in fact trading off an existing (biological) one.
In this manuscript, we introduce the "Neural Resource Allocation" problem,
which distinguishes body augmentation from existing robotics paradigms such as
teleoperation and prosthetics. We discuss how to allow the effective and
effortless voluntary control of augmentative devices without compromising the
voluntary control of the biological body. In reviewing the relevant literature
on extra robotic fingers and limbs we critically assess the range of potential
solutions available for the "Neural Resource Allocation" problem. For this
purpose, we combine multiple perspectives from engineering and neuroscience
with considerations from human-machine interaction, sensory-motor integration,
ethics and law. Altogether we aim to define common foundations and operating
principles for the successful implementation of motor augmentation.
|
[
{
"created": "Wed, 31 Mar 2021 17:54:13 GMT",
"version": "v1"
}
] |
2021-04-01
|
[
[
"Dominijanni",
"Giulia",
""
],
[
"Shokur",
"Solaiman",
""
],
[
"Salvietti",
"Gionata",
""
],
[
"Buehler",
"Sarah",
""
],
[
"Palmerini",
"Erica",
""
],
[
"Rossi",
"Simone",
""
],
[
"De Vignemont",
"Frederique",
""
],
[
"D'Avella",
"Andrea",
""
],
[
"Makin",
"Tamar R.",
""
],
[
"Prattichizzo",
"Domenico",
""
],
[
"Micera",
"Silvestro",
""
]
] |
The emergence of robot-based body augmentation promises exciting innovations that will inform robotics, human-machine interaction, and wearable electronics. Even though augmentative devices like extra robotic arms and fingers in many ways build on restorative technologies, they introduce unique challenges for bidirectional human-machine collaboration. Can humans adapt and learn to operate a new limb collaboratively with their biological limbs without sacrificing their physical abilities? To successfully achieve robotic body augmentation, we need to ensure that by giving a person an additional (artificial) limb, we are not in fact trading off an existing (biological) one. In this manuscript, we introduce the "Neural Resource Allocation" problem, which distinguishes body augmentation from existing robotics paradigms such as teleoperation and prosthetics. We discuss how to allow the effective and effortless voluntary control of augmentative devices without compromising the voluntary control of the biological body. In reviewing the relevant literature on extra robotic fingers and limbs we critically assess the range of potential solutions available for the "Neural Resource Allocation" problem. For this purpose, we combine multiple perspectives from engineering and neuroscience with considerations from human-machine interaction, sensory-motor integration, ethics and law. Altogether we aim to define common foundations and operating principles for the successful implementation of motor augmentation.
|
2307.11654
|
H\'ector Carri\'on
|
H\'ector Carri\'on, Narges Norouzi
|
FEDD -- Fair, Efficient, and Diverse Diffusion-based Lesion Segmentation
and Malignancy Classification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Skin diseases affect millions of people worldwide, across all ethnicities.
Increasing diagnosis accessibility requires fair and accurate segmentation and
classification of dermatology images. However, the scarcity of annotated
medical images, especially for rare diseases and underrepresented skin tones,
poses a challenge to the development of fair and accurate models. In this
study, we introduce a Fair, Efficient, and Diverse Diffusion-based framework
for skin lesion segmentation and malignancy classification. FEDD leverages
semantically meaningful feature embeddings learned through a denoising
diffusion probabilistic backbone and processes them via linear probes to
achieve state-of-the-art performance on Diverse Dermatology Images (DDI). We
achieve an improvement in intersection over union of 0.18, 0.13, 0.06, and 0.07
while using only 5%, 10%, 15%, and 20% labeled samples, respectively.
Additionally, FEDD trained on 10% of DDI demonstrates malignancy classification
accuracy of 81%, 14% higher compared to the state-of-the-art. We showcase high
efficiency in data-constrained scenarios while providing fair performance for
diverse skin tones and rare malignancy conditions. Our newly annotated DDI
segmentation masks and training code can be found on
https://github.com/hectorcarrion/fedd.
|
[
{
"created": "Fri, 21 Jul 2023 15:42:01 GMT",
"version": "v1"
}
] |
2023-07-24
|
[
[
"Carrión",
"Héctor",
""
],
[
"Norouzi",
"Narges",
""
]
] |
Skin diseases affect millions of people worldwide, across all ethnicities. Increasing diagnosis accessibility requires fair and accurate segmentation and classification of dermatology images. However, the scarcity of annotated medical images, especially for rare diseases and underrepresented skin tones, poses a challenge to the development of fair and accurate models. In this study, we introduce a Fair, Efficient, and Diverse Diffusion-based framework for skin lesion segmentation and malignancy classification. FEDD leverages semantically meaningful feature embeddings learned through a denoising diffusion probabilistic backbone and processes them via linear probes to achieve state-of-the-art performance on Diverse Dermatology Images (DDI). We achieve an improvement in intersection over union of 0.18, 0.13, 0.06, and 0.07 while using only 5%, 10%, 15%, and 20% labeled samples, respectively. Additionally, FEDD trained on 10% of DDI demonstrates malignancy classification accuracy of 81%, 14% higher compared to the state-of-the-art. We showcase high efficiency in data-constrained scenarios while providing fair performance for diverse skin tones and rare malignancy conditions. Our newly annotated DDI segmentation masks and training code can be found on https://github.com/hectorcarrion/fedd.
|
1808.00923
|
Valeria Vignudelli
|
Filippo Bonchi, Ana Sokolova, Valeria Vignudelli
|
The Theory of Traces for Systems with Nondeterminism, Probability, and
Termination
| null |
Logical Methods in Computer Science, Volume 18, Issue 2 (June 17,
2022) lmcs:6261
|
10.46298/lmcs-18(2:21)2022
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies trace-based equivalences for systems combining
nondeterministic and probabilistic choices. We show how trace semantics for
such processes can be recovered by instantiating a coalgebraic construction
known as the generalised powerset construction. We characterise and compare the
resulting semantics to known definitions of trace equivalences appearing in the
literature. Most of our results are based on the exciting interplay between
monads and their presentations via algebraic theories.
|
[
{
"created": "Thu, 2 Aug 2018 17:19:29 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Aug 2018 14:01:53 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Jan 2019 11:44:53 GMT",
"version": "v3"
},
{
"created": "Wed, 1 Apr 2020 11:07:38 GMT",
"version": "v4"
},
{
"created": "Mon, 8 Mar 2021 17:55:46 GMT",
"version": "v5"
},
{
"created": "Wed, 4 May 2022 10:22:40 GMT",
"version": "v6"
},
{
"created": "Thu, 16 Jun 2022 09:23:00 GMT",
"version": "v7"
}
] |
2023-06-22
|
[
[
"Bonchi",
"Filippo",
""
],
[
"Sokolova",
"Ana",
""
],
[
"Vignudelli",
"Valeria",
""
]
] |
This paper studies trace-based equivalences for systems combining nondeterministic and probabilistic choices. We show how trace semantics for such processes can be recovered by instantiating a coalgebraic construction known as the generalised powerset construction. We characterise and compare the resulting semantics to known definitions of trace equivalences appearing in the literature. Most of our results are based on the exciting interplay between monads and their presentations via algebraic theories.
|
2107.01428
|
Konrad Dabrowski
|
Konrad K. Dabrowski and Peter Jonsson and Sebastian Ordyniak and
George Osipov
|
Solving Infinite-Domain CSPs Using the Patchwork Property
|
34 pages, 2 figures. Parts of this article appeared in the
proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI
2021)
| null | null | null |
cs.AI cs.CC cs.DS cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The constraint satisfaction problem (CSP) has important applications in
computer science and AI. In particular, infinite-domain CSPs have been
intensively used in subareas of AI such as spatio-temporal reasoning. Since
constraint satisfaction is a computationally hard problem, much work has been
devoted to identifying restricted problems that are efficiently solvable. One
way of doing this is to restrict the interactions of variables and constraints,
and a highly successful approach is to bound the treewidth of the underlying
primal graph. Bodirsky & Dalmau [J. Comput. System. Sci. 79(1), 2013] and Huang
et al. [Artif. Intell. 195, 2013] proved that CSP$(\Gamma)$ can be solved in
$n^{f(w)}$ time (where $n$ is the size of the instance, $w$ is the treewidth of
the primal graph and $f$ is a computable function) for certain classes of
constraint languages $\Gamma$. We improve this bound to $f(w) \cdot n^{O(1)}$,
where the function $f$ only depends on the language $\Gamma$, for CSPs whose
basic relations have the patchwork property. Hence, such problems are
fixed-parameter tractable and our algorithm is asymptotically faster than the
previous ones. Additionally, our approach is not restricted to binary
constraints, so it is applicable to a strictly larger class of problems than
that of Huang et al. However, there exist natural problems that are covered by
Bodirsky & Dalmau's algorithm but not by ours, and we begin investigating ways
of generalising our results to larger families of languages. We also analyse
our algorithm with respect to its running time and show that it is optimal
(under the Exponential Time Hypothesis) for certain languages such as Allen's
Interval Algebra.
|
[
{
"created": "Sat, 3 Jul 2021 13:04:41 GMT",
"version": "v1"
}
] |
2021-07-06
|
[
[
"Dabrowski",
"Konrad K.",
""
],
[
"Jonsson",
"Peter",
""
],
[
"Ordyniak",
"Sebastian",
""
],
[
"Osipov",
"George",
""
]
] |
The constraint satisfaction problem (CSP) has important applications in computer science and AI. In particular, infinite-domain CSPs have been intensively used in subareas of AI such as spatio-temporal reasoning. Since constraint satisfaction is a computationally hard problem, much work has been devoted to identifying restricted problems that are efficiently solvable. One way of doing this is to restrict the interactions of variables and constraints, and a highly successful approach is to bound the treewidth of the underlying primal graph. Bodirsky & Dalmau [J. Comput. System. Sci. 79(1), 2013] and Huang et al. [Artif. Intell. 195, 2013] proved that CSP$(\Gamma)$ can be solved in $n^{f(w)}$ time (where $n$ is the size of the instance, $w$ is the treewidth of the primal graph and $f$ is a computable function) for certain classes of constraint languages $\Gamma$. We improve this bound to $f(w) \cdot n^{O(1)}$, where the function $f$ only depends on the language $\Gamma$, for CSPs whose basic relations have the patchwork property. Hence, such problems are fixed-parameter tractable and our algorithm is asymptotically faster than the previous ones. Additionally, our approach is not restricted to binary constraints, so it is applicable to a strictly larger class of problems than that of Huang et al. However, there exist natural problems that are covered by Bodirsky & Dalmau's algorithm but not by ours, and we begin investigating ways of generalising our results to larger families of languages. We also analyse our algorithm with respect to its running time and show that it is optimal (under the Exponential Time Hypothesis) for certain languages such as Allen's Interval Algebra.
|
2208.14586
|
Yasunori Ishii Mr
|
Yuzuru Nakamura, Yasunori Ishii, Yuki Maruyama, Takayoshi Yamashita
|
Few-shot Adaptive Object Detection with Cross-Domain CutMix
|
Yuzuru Nakamura and Yasunori Ishii are equal contribution
| null | null | null |
cs.CV stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In object detection, data amount and cost are a trade-off, and collecting a
large amount of data in a specific domain is labor intensive. Therefore,
existing large-scale datasets are used for pre-training. However, conventional
transfer learning and domain adaptation cannot bridge the domain gap when the
target domain differs significantly from the source domain. We propose a data
synthesis method that can solve the large domain gap problem. In this method, a
part of the target image is pasted onto the source image, and the position of
the pasted region is aligned by utilizing the information of the object
bounding box. In addition, we introduce adversarial learning to discriminate
whether the original or the pasted regions. The proposed method trains on a
large number of source images and a few target domain images. The proposed
method achieves higher accuracy than conventional methods in a very different
domain problem setting, where RGB images are the source domain, and thermal
infrared images are the target domain. Similarly, the proposed method achieves
higher accuracy in the cases of simulation images to real images.
|
[
{
"created": "Wed, 31 Aug 2022 01:26:10 GMT",
"version": "v1"
}
] |
2022-09-01
|
[
[
"Nakamura",
"Yuzuru",
""
],
[
"Ishii",
"Yasunori",
""
],
[
"Maruyama",
"Yuki",
""
],
[
"Yamashita",
"Takayoshi",
""
]
] |
In object detection, data amount and cost are a trade-off, and collecting a large amount of data in a specific domain is labor intensive. Therefore, existing large-scale datasets are used for pre-training. However, conventional transfer learning and domain adaptation cannot bridge the domain gap when the target domain differs significantly from the source domain. We propose a data synthesis method that can solve the large domain gap problem. In this method, a part of the target image is pasted onto the source image, and the position of the pasted region is aligned by utilizing the information of the object bounding box. In addition, we introduce adversarial learning to discriminate whether the original or the pasted regions. The proposed method trains on a large number of source images and a few target domain images. The proposed method achieves higher accuracy than conventional methods in a very different domain problem setting, where RGB images are the source domain, and thermal infrared images are the target domain. Similarly, the proposed method achieves higher accuracy in the cases of simulation images to real images.
|
2001.06626
|
Hengyi Cai
|
Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao,
Dawei Yin
|
Adaptive Parameterization for Neural Dialogue Generation
|
Published as a long paper in EMNLP 2019
| null |
10.18653/v1/D19-1188
| null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural conversation systems generate responses based on the
sequence-to-sequence (SEQ2SEQ) paradigm. Typically, the model is equipped with
a single set of learned parameters to generate responses for given input
contexts. When confronting diverse conversations, its adaptability is rather
limited and the model is hence prone to generate generic responses. In this
work, we propose an {\bf Ada}ptive {\bf N}eural {\bf D}ialogue generation
model, \textsc{AdaND}, which manages various conversations with
conversation-specific parameterization. For each conversation, the model
generates parameters of the encoder-decoder by referring to the input context.
In particular, we propose two adaptive parameterization mechanisms: a
context-aware and a topic-aware parameterization mechanism. The context-aware
parameterization directly generates the parameters by capturing local semantics
of the given context. The topic-aware parameterization enables parameter
sharing among conversations with similar topics by first inferring the latent
topics of the given context and then generating the parameters with respect to
the distributional topics. Extensive experiments conducted on a large-scale
real-world conversational dataset show that our model achieves superior
performance in terms of both quantitative metrics and human evaluations.
|
[
{
"created": "Sat, 18 Jan 2020 08:18:19 GMT",
"version": "v1"
}
] |
2020-01-22
|
[
[
"Cai",
"Hengyi",
""
],
[
"Chen",
"Hongshen",
""
],
[
"Zhang",
"Cheng",
""
],
[
"Song",
"Yonghao",
""
],
[
"Zhao",
"Xiaofang",
""
],
[
"Yin",
"Dawei",
""
]
] |
Neural conversation systems generate responses based on the sequence-to-sequence (SEQ2SEQ) paradigm. Typically, the model is equipped with a single set of learned parameters to generate responses for given input contexts. When confronting diverse conversations, its adaptability is rather limited and the model is hence prone to generate generic responses. In this work, we propose an {\bf Ada}ptive {\bf N}eural {\bf D}ialogue generation model, \textsc{AdaND}, which manages various conversations with conversation-specific parameterization. For each conversation, the model generates parameters of the encoder-decoder by referring to the input context. In particular, we propose two adaptive parameterization mechanisms: a context-aware and a topic-aware parameterization mechanism. The context-aware parameterization directly generates the parameters by capturing local semantics of the given context. The topic-aware parameterization enables parameter sharing among conversations with similar topics by first inferring the latent topics of the given context and then generating the parameters with respect to the distributional topics. Extensive experiments conducted on a large-scale real-world conversational dataset show that our model achieves superior performance in terms of both quantitative metrics and human evaluations.
|
1612.05794
|
Zeeshan Malik Khawar
|
Zeeshan Khawar Malik, Zain U. Hussain, Ziad Kobti, Charlie W. Lees,
Newton Howard and Amir Hussain
|
A new recurrent neural network based predictive model for Faecal
Calprotectin analysis: A retrospective study
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Faecal Calprotectin (FC) is a surrogate marker for intestinal inflammation,
termed Inflammatory Bowel Disease (IBD), but not for cancer. In this
retrospective study of 804 patients, an enhanced benchmark predictive model for
analyzing FC is developed, based on a novel state-of-the-art Echo State Network
(ESN), an advanced dynamic recurrent neural network which implements a
biologically plausible architecture, and a supervised learning mechanism. The
proposed machine learning driven predictive model is benchmarked against a
conventional logistic regression model, demonstrating statistically significant
performance improvements.
|
[
{
"created": "Sat, 17 Dec 2016 17:01:08 GMT",
"version": "v1"
}
] |
2016-12-20
|
[
[
"Malik",
"Zeeshan Khawar",
""
],
[
"Hussain",
"Zain U.",
""
],
[
"Kobti",
"Ziad",
""
],
[
"Lees",
"Charlie W.",
""
],
[
"Howard",
"Newton",
""
],
[
"Hussain",
"Amir",
""
]
] |
Faecal Calprotectin (FC) is a surrogate marker for intestinal inflammation, as seen in Inflammatory Bowel Disease (IBD), but not for cancer. In this retrospective study of 804 patients, an enhanced benchmark predictive model for analyzing FC is developed, based on a novel state-of-the-art Echo State Network (ESN), an advanced dynamic recurrent neural network which implements a biologically plausible architecture, and a supervised learning mechanism. The proposed machine learning driven predictive model is benchmarked against a conventional logistic regression model, demonstrating statistically significant performance improvements.
|
2105.06575
|
Daniel Larraz
|
Daniel Larraz, Micka\"el Laurent, Cesare Tinelli
|
Merit and Blame Assignment with Kind 2
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce two new major features of the open-source model checker Kind 2
which provide traceability information between specification and design
elements such as assumptions, guarantees, or other behavioral constraints in
synchronous reactive system models. This new version of Kind 2 can identify
minimal sets of design elements, known as Minimal Inductive Validity Cores,
which are sufficient to prove a given set of safety properties, and also
determine the set of MUST elements, design elements that are necessary to prove
the given properties. In addition, Kind 2 is able to find minimal sets of
design constraints, known as Minimal Cut Sets, whose violation leads the system
to an unsafe state. The computed information can be used for several purposes,
including assessing the quality of a system specification, tracking the safety
impact of model changes, and analyzing the tolerance and resilience of a system
against faults or cyber-attacks. We describe these new capabilities in some
detail and report on an initial experimental evaluation of some of them.
|
[
{
"created": "Thu, 13 May 2021 22:40:09 GMT",
"version": "v1"
}
] |
2021-05-17
|
[
[
"Larraz",
"Daniel",
""
],
[
"Laurent",
"Mickaël",
""
],
[
"Tinelli",
"Cesare",
""
]
] |
We introduce two new major features of the open-source model checker Kind 2 which provide traceability information between specification and design elements such as assumptions, guarantees, or other behavioral constraints in synchronous reactive system models. This new version of Kind 2 can identify minimal sets of design elements, known as Minimal Inductive Validity Cores, which are sufficient to prove a given set of safety properties, and also determine the set of MUST elements, design elements that are necessary to prove the given properties. In addition, Kind 2 is able to find minimal sets of design constraints, known as Minimal Cut Sets, whose violation leads the system to an unsafe state. The computed information can be used for several purposes, including assessing the quality of a system specification, tracking the safety impact of model changes, and analyzing the tolerance and resilience of a system against faults or cyber-attacks. We describe these new capabilities in some detail and report on an initial experimental evaluation of some of them.
|
2405.14078
|
Han-Dong Lim
|
Han-Dong Lim, Donghwan Lee
|
A finite time analysis of distributed Q-learning
| null | null | null | null |
cs.AI cs.LG cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-agent reinforcement learning (MARL) has witnessed a remarkable surge in
interest, fueled by the empirical success achieved in applications of
single-agent reinforcement learning (RL). In this study, we consider a
distributed Q-learning scenario, wherein a number of agents cooperatively solve
a sequential decision making problem without access to the central reward
function which is an average of the local rewards. In particular, we study the
finite-time analysis of a distributed Q-learning algorithm, and provide a new
sample complexity result of $\tilde{\mathcal{O}}\left(
\min\left\{\frac{1}{\epsilon^2}\frac{t_{\text{mix}}}{(1-\gamma)^6 d_{\min}^4},
\frac{1}{\epsilon}\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{(1-\sigma_2(\boldsymbol{W}))(1-\gamma)^4
d_{\min}^3} \right\}\right)$ under tabular lookup
|
[
{
"created": "Thu, 23 May 2024 00:52:38 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Lim",
"Han-Dong",
""
],
[
"Lee",
"Donghwan",
""
]
] |
Multi-agent reinforcement learning (MARL) has witnessed a remarkable surge in interest, fueled by the empirical success achieved in applications of single-agent reinforcement learning (RL). In this study, we consider a distributed Q-learning scenario, wherein a number of agents cooperatively solve a sequential decision making problem without access to the central reward function which is an average of the local rewards. In particular, we study the finite-time analysis of a distributed Q-learning algorithm, and provide a new sample complexity result of $\tilde{\mathcal{O}}\left( \min\left\{\frac{1}{\epsilon^2}\frac{t_{\text{mix}}}{(1-\gamma)^6 d_{\min}^4}, \frac{1}{\epsilon}\frac{\sqrt{|\mathcal{S}||\mathcal{A}|}}{(1-\sigma_2(\boldsymbol{W}))(1-\gamma)^4 d_{\min}^3} \right\}\right)$ under tabular lookup
|
2207.12939
|
Kai H\"appeler
|
Senay Cakir, Marcel Gau{\ss}, Kai H\"appeler, Yassine Ounajjar, Fabian
Heinle and Reiner Marchthaler
|
Semantic Segmentation for Autonomous Driving: Model Evaluation, Dataset
Generation, Perspective Comparison, and Real-Time Capability
|
8 pages, 7 figures, 9 tables
| null | null | null |
cs.CV cs.AI cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Environmental perception is an important aspect within the field of
autonomous vehicles that provides crucial information about the driving domain,
including but not limited to identifying clear driving areas and surrounding
obstacles. Semantic segmentation is a widely used perception method for
self-driving cars that associates each pixel of an image with a predefined
class. In this context, several segmentation models are evaluated regarding
accuracy and efficiency. Experimental results on the generated dataset confirm
that the segmentation model FasterSeg is fast enough to be used in real time
on low-power computational (embedded) devices in self-driving cars. A simple
method
is also introduced to generate synthetic training data for the model. Moreover,
the accuracy of the first-person perspective and the bird's eye view
perspective are compared. For a $320 \times 256$ input in the first-person
perspective, FasterSeg achieves $65.44\,\%$ mean Intersection over Union
(mIoU), and for a $320 \times 256$ input from the bird's eye view perspective,
FasterSeg achieves $64.08\,\%$ mIoU. Both perspectives achieve a frame rate of
$247.11$ Frames per Second (FPS) on the NVIDIA Jetson AGX Xavier. Lastly, the
frame rate and the accuracy with respect to the arithmetic 16-bit Floating
Point (FP16) and 32-bit Floating Point (FP32) of both perspectives are measured
and compared on the target hardware.
|
[
{
"created": "Tue, 26 Jul 2022 14:45:44 GMT",
"version": "v1"
}
] |
2022-07-27
|
[
[
"Cakir",
"Senay",
""
],
[
"Gauß",
"Marcel",
""
],
[
"Häppeler",
"Kai",
""
],
[
"Ounajjar",
"Yassine",
""
],
[
"Heinle",
"Fabian",
""
],
[
"Marchthaler",
"Reiner",
""
]
] |
Environmental perception is an important aspect within the field of autonomous vehicles that provides crucial information about the driving domain, including but not limited to identifying clear driving areas and surrounding obstacles. Semantic segmentation is a widely used perception method for self-driving cars that associates each pixel of an image with a predefined class. In this context, several segmentation models are evaluated regarding accuracy and efficiency. Experimental results on the generated dataset confirm that the segmentation model FasterSeg is fast enough to be used in real time on low-power computational (embedded) devices in self-driving cars. A simple method is also introduced to generate synthetic training data for the model. Moreover, the accuracy of the first-person perspective and the bird's eye view perspective are compared. For a $320 \times 256$ input in the first-person perspective, FasterSeg achieves $65.44\,\%$ mean Intersection over Union (mIoU), and for a $320 \times 256$ input from the bird's eye view perspective, FasterSeg achieves $64.08\,\%$ mIoU. Both perspectives achieve a frame rate of $247.11$ Frames per Second (FPS) on the NVIDIA Jetson AGX Xavier. Lastly, the frame rate and the accuracy with respect to the arithmetic 16-bit Floating Point (FP16) and 32-bit Floating Point (FP32) of both perspectives are measured and compared on the target hardware.
|
1109.2112
|
Andrew King
|
Maria Chudnovsky, Andrew D. King, Matthieu Plumettaz and Paul Seymour
|
A local strengthening of Reed's {\omega}, \Delta, {\chi} conjecture for
quasi-line graphs
|
18 pages, 1 figure
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reed's $\omega$, $\Delta$, $\chi$ conjecture proposes that every graph
satisfies $\chi\leq \lceil\frac 12(\Delta+1+\omega)\rceil$; it is known to hold
for all claw-free graphs. In this paper we consider a local strengthening of
this conjecture. We prove the local strengthening for line graphs, then note
that previous results immediately tell us that the local strengthening holds
for all quasi-line graphs. Our proofs lead to polytime algorithms for
constructing colourings that achieve our bounds: $O(n^2)$ for line graphs and
$O(n^3m^2)$ for quasi-line graphs. For line graphs, this is faster than the
best known algorithm for constructing a colouring that achieves the bound of
Reed's original conjecture.
|
[
{
"created": "Fri, 9 Sep 2011 19:58:34 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Nov 2011 01:00:08 GMT",
"version": "v2"
}
] |
2011-11-30
|
[
[
"Chudnovsky",
"Maria",
""
],
[
"King",
"Andrew D.",
""
],
[
"Plumettaz",
"Matthieu",
""
],
[
"Seymour",
"Paul",
""
]
] |
Reed's $\omega$, $\Delta$, $\chi$ conjecture proposes that every graph satisfies $\chi\leq \lceil\frac 12(\Delta+1+\omega)\rceil$; it is known to hold for all claw-free graphs. In this paper we consider a local strengthening of this conjecture. We prove the local strengthening for line graphs, then note that previous results immediately tell us that the local strengthening holds for all quasi-line graphs. Our proofs lead to polytime algorithms for constructing colourings that achieve our bounds: $O(n^2)$ for line graphs and $O(n^3m^2)$ for quasi-line graphs. For line graphs, this is faster than the best known algorithm for constructing a colouring that achieves the bound of Reed's original conjecture.
|
2009.10444
|
Manuel Aiple
|
Manuel Aiple, Andre Schiele and Frans C.T. van der Helm
|
Self-Adapting Variable Impedance Actuator Control for Precision and
Dynamic Tasks
|
12 pages, 13 figures, submitted to IEEE Transactions on Haptics
| null | null | null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Variable impedance actuators (VIAs) as tool devices for teleoperation could
extend the range of tasks that humans can perform through a teleoperated robot
by mimicking the change of upper limb stiffness that humans perform for
different tasks, increasing the dynamic range of the robot. This requires
appropriate impedance control. The goal of this study is to show the
effectiveness of a controller that does not require additional sensors,
reducing system complexity and increasing ease of use. The controller should
allow the user to perform
precise positioning tasks and dynamic tasks like hammering through
teleoperation with a VIA tool device automatically adapting the impedance
setting of the VIA. This is achieved by a control law according to the
principle "slow-stiff/fast-soft". The controller was tested in a human user
study with 24 participants comparing the human-machine performance with the
self-adapting controller in a bilateral telemanipulation experiment with two
tasks (precision/dynamic) using three impedance settings (high/low/adaptive
impedance). The results indicate that the proposed system performs equally well
as state of the art stiff teleoperation devices for precision tasks, while
having benefits in terms of increased safety and reduced wear for dynamic
tasks. This is a step towards teleoperation with a wide dynamic range.
|
[
{
"created": "Tue, 22 Sep 2020 10:51:59 GMT",
"version": "v1"
}
] |
2020-09-23
|
[
[
"Aiple",
"Manuel",
""
],
[
"Schiele",
"Andre",
""
],
[
"van der Helm",
"Frans C. T.",
""
]
] |
Variable impedance actuators (VIAs) as tool devices for teleoperation could extend the range of tasks that humans can perform through a teleoperated robot by mimicking the change of upper limb stiffness that humans perform for different tasks, increasing the dynamic range of the robot. This requires appropriate impedance control. The goal of this study is to show the effectiveness of a controller that does not require additional sensors, reducing system complexity and increasing ease of use. The controller should allow the user to perform precise positioning tasks and dynamic tasks like hammering through teleoperation with a VIA tool device automatically adapting the impedance setting of the VIA. This is achieved by a control law according to the principle "slow-stiff/fast-soft". The controller was tested in a human user study with 24 participants comparing the human-machine performance with the self-adapting controller in a bilateral telemanipulation experiment with two tasks (precision/dynamic) using three impedance settings (high/low/adaptive impedance). The results indicate that the proposed system performs equally well as state of the art stiff teleoperation devices for precision tasks, while having benefits in terms of increased safety and reduced wear for dynamic tasks. This is a step towards teleoperation with a wide dynamic range.
|
1803.00047
|
Myle Ott
|
Myle Ott and Michael Auli and David Grangier and Marc'Aurelio Ranzato
|
Analyzing Uncertainty in Neural Machine Translation
|
ICML 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine translation is a popular test bed for research in neural
sequence-to-sequence models but despite much recent research, there is still a
lack of understanding of these models. Practitioners report performance
degradation with large beams, the under-estimation of rare words and a lack of
diversity in the final translations. Our study relates some of these issues to
the inherent uncertainty of the task, due to the existence of multiple valid
translations for a single source sentence, and to the extrinsic uncertainty
caused by noisy training data. We propose tools and metrics to assess how
uncertainty in the data is captured by the model distribution and how it
affects search strategies that generate translations. Our results show that
search works remarkably well but that models tend to spread too much
probability mass over the hypothesis space. Next, we propose tools to assess
model calibration and show how to easily fix some shortcomings of current
models. As part of this study, we release multiple human reference translations
for two popular benchmarks.
|
[
{
"created": "Wed, 28 Feb 2018 19:33:24 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Mar 2018 15:10:32 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Jun 2018 11:12:43 GMT",
"version": "v3"
},
{
"created": "Mon, 13 Aug 2018 17:13:23 GMT",
"version": "v4"
}
] |
2018-08-14
|
[
[
"Ott",
"Myle",
""
],
[
"Auli",
"Michael",
""
],
[
"Grangier",
"David",
""
],
[
"Ranzato",
"Marc'Aurelio",
""
]
] |
Machine translation is a popular test bed for research in neural sequence-to-sequence models but despite much recent research, there is still a lack of understanding of these models. Practitioners report performance degradation with large beams, the under-estimation of rare words and a lack of diversity in the final translations. Our study relates some of these issues to the inherent uncertainty of the task, due to the existence of multiple valid translations for a single source sentence, and to the extrinsic uncertainty caused by noisy training data. We propose tools and metrics to assess how uncertainty in the data is captured by the model distribution and how it affects search strategies that generate translations. Our results show that search works remarkably well but that models tend to spread too much probability mass over the hypothesis space. Next, we propose tools to assess model calibration and show how to easily fix some shortcomings of current models. As part of this study, we release multiple human reference translations for two popular benchmarks.
|
2010.07990
|
Adrian de Wynter
|
Adrian de Wynter
|
An Algorithm for Learning Smaller Representations of Models With Scarce
Data
|
Preprint. Under review
| null | null | null |
cs.LG cs.AI cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We present a greedy algorithm for solving binary classification problems in
situations where the dataset is either too small or not fully representative of
the problem being solved, and obtaining more data is not possible. This
algorithm is of particular interest when training small models that have
trouble generalizing. It relies on a trained model with loose accuracy
constraints, an iterative hyperparameter pruning procedure, and a function used
to generate new data. Analysis on correctness and runtime complexity under
ideal conditions and an extension to deep neural networks is provided. In the
former case we obtain an asymptotic bound of
$O\left(|\Theta^2|\left(\log{|\Theta|} + |\theta^2| + T_f\left(|
D|\right)\right) + \bar{S}|\Theta||{E}|\right)$, where $|{\Theta}|$ is the
cardinality of the set of hyperparameters $\theta$ to be searched; $|{E}|$ and
$|{D}|$ are the sizes of the evaluation and training datasets, respectively;
$\bar{S}$ and $\bar{f}$ are the inference times for the trained model and the
candidate model; and $T_f({|{D}|})$ is a polynomial on $|{D}|$ and $\bar{f}$.
Under these conditions, this algorithm returns a solution that is $1 \leq r
\leq 2(1 - {2^{-|{\Theta}|}})$ times better than simply enumerating and
training with any $\theta \in \Theta$. As part of our analysis of the
generating function we also prove that, under certain assumptions, if an open
cover of $D$ has the same homology as the manifold where the support of the
underlying probability distribution lies, then $D$ is learnable, and vice versa.
|
[
{
"created": "Thu, 15 Oct 2020 19:17:51 GMT",
"version": "v1"
}
] |
2020-10-19
|
[
[
"de Wynter",
"Adrian",
""
]
] |
We present a greedy algorithm for solving binary classification problems in situations where the dataset is either too small or not fully representative of the problem being solved, and obtaining more data is not possible. This algorithm is of particular interest when training small models that have trouble generalizing. It relies on a trained model with loose accuracy constraints, an iterative hyperparameter pruning procedure, and a function used to generate new data. Analysis on correctness and runtime complexity under ideal conditions and an extension to deep neural networks is provided. In the former case we obtain an asymptotic bound of $O\left(|\Theta^2|\left(\log{|\Theta|} + |\theta^2| + T_f\left(|D|\right)\right) + \bar{S}|\Theta||{E}|\right)$, where $|{\Theta}|$ is the cardinality of the set of hyperparameters $\theta$ to be searched; $|{E}|$ and $|{D}|$ are the sizes of the evaluation and training datasets, respectively; $\bar{S}$ and $\bar{f}$ are the inference times for the trained model and the candidate model; and $T_f({|{D}|})$ is a polynomial on $|{D}|$ and $\bar{f}$. Under these conditions, this algorithm returns a solution that is $1 \leq r \leq 2(1 - {2^{-|{\Theta}|}})$ times better than simply enumerating and training with any $\theta \in \Theta$. As part of our analysis of the generating function we also prove that, under certain assumptions, if an open cover of $D$ has the same homology as the manifold where the support of the underlying probability distribution lies, then $D$ is learnable, and vice versa.
|
2111.11983
|
Michael Raskin
|
Michael Raskin
|
Modular population protocols
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Population protocols are a model of distributed computation intended for the
study of networks of independent computing agents with dynamic communication
structure. Each agent has a finite number of states, and communication
opportunities occur nondeterministically, allowing the agents involved to
change their states based on each other's states. Population protocols are
often studied in terms of reaching a consensus on whether the input
configuration satisfied some predicate.
A desirable property of a computation model is modularity, the ability to
combine existing simpler computations in a straightforward way. In the present
paper we present a more general notion of functionality implemented by a
population protocol in terms of multisets of inputs and outputs. This notion
allows one to design multiphase protocols as combinations of independently
phases. The additional generality also increases the range of behaviours that
can be captured in applications (e.g. maintaining the role distribution in a
fleet of servers).
We show that composition of protocols can be performed in a uniform
mechanical way, and that the expressive power is essentially semilinear,
similar to the predicate expressive power in the original population protocol
setting.
|
[
{
"created": "Tue, 23 Nov 2021 16:24:45 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2024 16:50:03 GMT",
"version": "v2"
}
] |
2024-06-19
|
[
[
"Raskin",
"Michael",
""
]
] |
Population protocols are a model of distributed computation intended for the study of networks of independent computing agents with dynamic communication structure. Each agent has a finite number of states, and communication opportunities occur nondeterministically, allowing the agents involved to change their states based on each other's states. Population protocols are often studied in terms of reaching a consensus on whether the input configuration satisfied some predicate. A desirable property of a computation model is modularity, the ability to combine existing simpler computations in a straightforward way. In the present paper we present a more general notion of functionality implemented by a population protocol in terms of multisets of inputs and outputs. This notion allows one to design multiphase protocols as combinations of independently defined phases. The additional generality also increases the range of behaviours that can be captured in applications (e.g. maintaining the role distribution in a fleet of servers). We show that composition of protocols can be performed in a uniform mechanical way, and that the expressive power is essentially semilinear, similar to the predicate expressive power in the original population protocol setting.
|
1512.00242
|
Haibing Wu
|
Haibing Wu and Xiaodong Gu
|
Towards Dropout Training for Convolutional Neural Networks
|
This paper has been published in Neural Networks,
http://www.sciencedirect.com/science/article/pii/S0893608015001446
|
Neural Networks 71: 1-10 (2015)
|
10.1016/j.neunet.2015.07.007
| null |
cs.LG cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, dropout has seen increasing use in deep learning. For deep
convolutional neural networks, dropout is known to work well in fully-connected
layers. However, its effect in convolutional and pooling layers is still not
clear. This paper demonstrates that max-pooling dropout is equivalent to
randomly picking activation based on a multinomial distribution at training
time. In light of this insight, we advocate employing our proposed
probabilistic weighted pooling, instead of commonly used max-pooling, to act as
model averaging at test time. Empirical evidence validates the superiority of
probabilistic weighted pooling. We also empirically show that the effect of
convolutional dropout is not trivial, despite the dramatically reduced
possibility of over-fitting due to the convolutional architecture. Elaborately
designing dropout training simultaneously in max-pooling and fully-connected
layers, we achieve state-of-the-art performance on MNIST, and very competitive
results on CIFAR-10 and CIFAR-100, relative to other approaches without data
augmentation. Finally, we compare max-pooling dropout and stochastic pooling,
both of which introduce stochasticity based on multinomial distributions at
the pooling stage.
|
[
{
"created": "Tue, 1 Dec 2015 12:46:11 GMT",
"version": "v1"
}
] |
2015-12-02
|
[
[
"Wu",
"Haibing",
""
],
[
"Gu",
"Xiaodong",
""
]
] |
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage.
|
2404.08213
|
Jaewook Lee
|
Jaewook Lee, Jun Wang, Elizabeth Brown, Liam Chu, Sebastian S.
Rodriguez, Jon E. Froehlich
|
GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun
Disambiguation in Wearable Augmented Reality
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Voice assistants (VAs) like Siri and Alexa are transforming human-computer
interaction; however, they lack awareness of users' spatiotemporal context,
resulting in limited performance and unnatural dialogue. We introduce
GazePointAR, a fully-functional context-aware VA for wearable augmented reality
that leverages eye gaze, pointing gestures, and conversation history to
disambiguate speech queries. With GazePointAR, users can ask "what's over
there?" or "how do I solve this math problem?" simply by looking and/or
pointing. We evaluated GazePointAR in a three-part lab study (N=12): (1)
comparing GazePointAR to two commercial systems; (2) examining GazePointAR's
pronoun disambiguation across three tasks; and (3) an open-ended phase where
participants could suggest and try their own context-sensitive queries.
Participants appreciated the naturalness and human-like nature of
pronoun-driven queries, although sometimes pronoun use was counter-intuitive.
We then iterated on GazePointAR and conducted a first-person diary study
examining how GazePointAR performs in-the-wild. We conclude by enumerating
limitations and design considerations for future context-aware VAs.
|
[
{
"created": "Fri, 12 Apr 2024 02:50:43 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Lee",
"Jaewook",
""
],
[
"Wang",
"Jun",
""
],
[
"Brown",
"Elizabeth",
""
],
[
"Chu",
"Liam",
""
],
[
"Rodriguez",
"Sebastian S.",
""
],
[
"Froehlich",
"Jon E.",
""
]
] |
Voice assistants (VAs) like Siri and Alexa are transforming human-computer interaction; however, they lack awareness of users' spatiotemporal context, resulting in limited performance and unnatural dialogue. We introduce GazePointAR, a fully-functional context-aware VA for wearable augmented reality that leverages eye gaze, pointing gestures, and conversation history to disambiguate speech queries. With GazePointAR, users can ask "what's over there?" or "how do I solve this math problem?" simply by looking and/or pointing. We evaluated GazePointAR in a three-part lab study (N=12): (1) comparing GazePointAR to two commercial systems; (2) examining GazePointAR's pronoun disambiguation across three tasks; and (3) an open-ended phase where participants could suggest and try their own context-sensitive queries. Participants appreciated the naturalness and human-like nature of pronoun-driven queries, although sometimes pronoun use was counter-intuitive. We then iterated on GazePointAR and conducted a first-person diary study examining how GazePointAR performs in-the-wild. We conclude by enumerating limitations and design considerations for future context-aware VAs.
|
2203.05085
|
Aloni Cohen
|
Aloni Cohen, Moon Duchin, JN Matthews, Bhushan Suwal
|
Census TopDown: The Impacts of Differential Privacy on Redistricting
|
2nd Symposium on Foundations of Responsible Computing (FORC 2021)
| null |
10.4230/LIPIcs.FORC.2021.5
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The 2020 Decennial Census will be released with a new disclosure avoidance
system in place, putting differential privacy in the spotlight for a wide range
of data users. We consider several key applications of Census data in
redistricting, developing tools and demonstrations for practitioners who are
concerned about the impacts of this new noising algorithm called TopDown. Based
on a close look at reconstructed Texas data, we find reassuring evidence that
TopDown will not threaten the ability to produce districts with tolerable
population balance or to detect signals of racial polarization for Voting
Rights Act enforcement.
|
[
{
"created": "Wed, 9 Mar 2022 23:28:53 GMT",
"version": "v1"
}
] |
2022-03-11
|
[
[
"Cohen",
"Aloni",
""
],
[
"Duchin",
"Moon",
""
],
[
"Matthews",
"JN",
""
],
[
"Suwal",
"Bhushan",
""
]
] |
The 2020 Decennial Census will be released with a new disclosure avoidance system in place, putting differential privacy in the spotlight for a wide range of data users. We consider several key applications of Census data in redistricting, developing tools and demonstrations for practitioners who are concerned about the impacts of this new noising algorithm called TopDown. Based on a close look at reconstructed Texas data, we find reassuring evidence that TopDown will not threaten the ability to produce districts with tolerable population balance or to detect signals of racial polarization for Voting Rights Act enforcement.
|
2012.03910
|
Sebastian Biewer
|
Sebastian Biewer, Rayna Dimitrova, Michael Fries, Maciej Gazda, Thomas
Heinze, Holger Hermanns and Mohammad Reza Mousavi
|
Conformance Relations and Hyperproperties for Doping Detection in Time
and Space
| null |
Logical Methods in Computer Science, Volume 18, Issue 1 (January
19, 2022) lmcs:6963
|
10.46298/lmcs-18(1:14)2022
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel and generalised notion of doping cleanness for
cyber-physical systems that allows for perturbing the inputs and observing the
perturbed outputs both in the time- and value-domains. We instantiate our
definition using existing notions of conformance for cyber-physical systems. As
a formal basis for monitoring conformance-based cleanness, we develop the
temporal logic HyperSTL*, an extension of Signal Temporal Logics with trace
quantifiers and a freeze operator. We show that our generalised definitions are
essential in a data-driven method for doping detection and apply our
definitions to a case study concerning diesel emission tests.
|
[
{
"created": "Mon, 7 Dec 2020 18:41:17 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Jul 2021 15:40:23 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Jan 2022 09:01:15 GMT",
"version": "v3"
}
] |
2023-06-22
|
[
[
"Biewer",
"Sebastian",
""
],
[
"Dimitrova",
"Rayna",
""
],
[
"Fries",
"Michael",
""
],
[
"Gazda",
"Maciej",
""
],
[
"Heinze",
"Thomas",
""
],
[
"Hermanns",
"Holger",
""
],
[
"Mousavi",
"Mohammad Reza",
""
]
] |
We present a novel and generalised notion of doping cleanness for cyber-physical systems that allows for perturbing the inputs and observing the perturbed outputs both in the time- and value-domains. We instantiate our definition using existing notions of conformance for cyber-physical systems. As a formal basis for monitoring conformance-based cleanness, we develop the temporal logic HyperSTL*, an extension of Signal Temporal Logics with trace quantifiers and a freeze operator. We show that our generalised definitions are essential in a data-driven method for doping detection and apply our definitions to a case study concerning diesel emission tests.
|
2203.14825
|
Richard Shaw
|
Richard Shaw, Sibi Catley-Chandar, Ales Leonardis, Eduardo
Perez-Pellitero
|
HDR Reconstruction from Bracketed Exposures and Events
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Reconstruction of high-quality HDR images is at the core of modern
computational photography. Significant progress has been made with multi-frame
HDR reconstruction methods, producing high-resolution, rich and accurate color
reconstructions with high-frequency details. However, they are still prone to
fail in dynamic or largely over-exposed scenes, where frame misalignment often
results in visible ghosting artifacts. Recent approaches attempt to alleviate
this by utilizing an event-based camera (EBC), which measures only binary
changes of illuminations. Despite their desirable high temporal resolution and
dynamic range characteristics, such approaches have not outperformed
traditional multi-frame reconstruction methods, mainly due to the lack of color
information and low-resolution sensors. In this paper, we propose to leverage
both bracketed LDR images and simultaneously captured events to obtain the best
of both worlds: high-quality RGB information from bracketed LDRs and
complementary high frequency and dynamic range information from events. We
present a multi-modal end-to-end learning-based HDR imaging system that fuses
bracketed images and event modalities in the feature domain using attention and
multi-scale spatial alignment modules. We propose a novel event-to-image
feature distillation module that learns to translate event features into the
image-feature space with self-supervision. Our framework exploits the higher
temporal resolution of events by sub-sampling the input event streams using a
sliding window, enriching our combined feature representation. Our proposed
approach surpasses SoTA multi-frame HDR reconstruction methods using synthetic
and real events, with a 2dB and 1dB improvement in PSNR-L and PSNR-mu on the
HdM HDR dataset, respectively.
|
[
{
"created": "Mon, 28 Mar 2022 15:04:41 GMT",
"version": "v1"
}
] |
2022-03-29
|
[
[
"Shaw",
"Richard",
""
],
[
"Catley-Chandar",
"Sibi",
""
],
[
"Leonardis",
"Ales",
""
],
[
"Perez-Pellitero",
"Eduardo",
""
]
] |
Reconstruction of high-quality HDR images is at the core of modern computational photography. Significant progress has been made with multi-frame HDR reconstruction methods, producing high-resolution, rich and accurate color reconstructions with high-frequency details. However, they are still prone to fail in dynamic or largely over-exposed scenes, where frame misalignment often results in visible ghosting artifacts. Recent approaches attempt to alleviate this by utilizing an event-based camera (EBC), which measures only binary changes of illuminations. Despite their desirable high temporal resolution and dynamic range characteristics, such approaches have not outperformed traditional multi-frame reconstruction methods, mainly due to the lack of color information and low-resolution sensors. In this paper, we propose to leverage both bracketed LDR images and simultaneously captured events to obtain the best of both worlds: high-quality RGB information from bracketed LDRs and complementary high frequency and dynamic range information from events. We present a multi-modal end-to-end learning-based HDR imaging system that fuses bracketed images and event modalities in the feature domain using attention and multi-scale spatial alignment modules. We propose a novel event-to-image feature distillation module that learns to translate event features into the image-feature space with self-supervision. Our framework exploits the higher temporal resolution of events by sub-sampling the input event streams using a sliding window, enriching our combined feature representation. Our proposed approach surpasses SoTA multi-frame HDR reconstruction methods using synthetic and real events, with a 2dB and 1dB improvement in PSNR-L and PSNR-mu on the HdM HDR dataset, respectively.
|
2303.01181
|
Maximilian Muschalik
|
Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke
H\"ullermeier
|
iSAGE: An Incremental Version of SAGE for Online Explanation on Data
Streams
| null | null |
10.1007/978-3-031-43418-1_26
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Existing methods for explainable artificial intelligence (XAI), including
popular feature importance measures such as SAGE, are mostly restricted to the
batch learning scenario. However, machine learning is often applied in dynamic
environments, where data arrives continuously and learning must be done in an
online manner. Therefore, we propose iSAGE, a time- and memory-efficient
incrementalization of SAGE, which is able to react to changes in the model as
well as to drift in the data-generating process. We further provide efficient
feature removal methods that break (interventional) and retain (observational)
feature dependencies. Moreover, we formally analyze our explanation method to
show that iSAGE adheres to similar theoretical properties as SAGE. Finally, we
evaluate our approach in a thorough experimental analysis based on
well-established data sets and data streams with concept drift.
|
[
{
"created": "Thu, 2 Mar 2023 11:51:54 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Jun 2023 18:10:04 GMT",
"version": "v2"
}
] |
2023-10-31
|
[
[
"Muschalik",
"Maximilian",
""
],
[
"Fumagalli",
"Fabian",
""
],
[
"Hammer",
"Barbara",
""
],
[
"Hüllermeier",
"Eyke",
""
]
] |
Existing methods for explainable artificial intelligence (XAI), including popular feature importance measures such as SAGE, are mostly restricted to the batch learning scenario. However, machine learning is often applied in dynamic environments, where data arrives continuously and learning must be done in an online manner. Therefore, we propose iSAGE, a time- and memory-efficient incrementalization of SAGE, which is able to react to changes in the model as well as to drift in the data-generating process. We further provide efficient feature removal methods that break (interventional) and retain (observational) feature dependencies. Moreover, we formally analyze our explanation method to show that iSAGE adheres to similar theoretical properties as SAGE. Finally, we evaluate our approach in a thorough experimental analysis based on well-established data sets and data streams with concept drift.
|
1812.01393
|
Yongchao Xu
|
Yongchao Xu, Yukang Wang, Wei Zhou, Yongpan Wang, Zhibo Yang, Xiang
Bai
|
TextField: Learning A Deep Direction Field for Irregular Scene Text
Detection
|
To appear in IEEE TIP
| null |
10.1109/TIP.2019.2900589
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene text detection is an important step in a scene text reading system. The
main challenges lie in the significantly varied sizes and aspect ratios,
arbitrary orientations, and shapes. Driven by recent progress in deep learning, impressive
performances have been achieved for multi-oriented text detection. Yet, the
performance drops dramatically in detecting curved texts due to the limited
text representation (e.g., horizontal bounding boxes, rotated rectangles, or
quadrilaterals). It is of great interest to detect curved texts, which are
actually very common in natural scenes. In this paper, we present a novel text
detector named TextField for detecting irregular scene texts. Specifically, we
learn a direction field pointing away from the nearest text boundary to each
text point. This direction field is represented by an image of two-dimensional
vectors and learned via a fully convolutional neural network. It encodes both
binary text mask and direction information used to separate adjacent text
instances, which is challenging for classical segmentation-based approaches.
Based on the learned direction field, we apply a simple yet effective
morphological-based post-processing to achieve the final detection.
Experimental results show that the proposed TextField outperforms the
state-of-the-art methods by a large margin (28% and 8%) on two curved text
datasets: Total-Text and CTW1500, respectively, and also achieves very
competitive performance on multi-oriented datasets: ICDAR 2015 and MSRA-TD500.
Furthermore, TextField is robust in generalizing to unseen datasets. The code
is available at https://github.com/YukangWang/TextField.
|
[
{
"created": "Tue, 4 Dec 2018 13:12:58 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2019 06:18:33 GMT",
"version": "v2"
}
] |
2019-10-02
|
[
[
"Xu",
"Yongchao",
""
],
[
"Wang",
"Yukang",
""
],
[
"Zhou",
"Wei",
""
],
[
"Wang",
"Yongpan",
""
],
[
"Yang",
"Zhibo",
""
],
[
"Bai",
"Xiang",
""
]
] |
Scene text detection is an important step in a scene text reading system. The main challenges lie in the significantly varied sizes and aspect ratios, arbitrary orientations, and shapes. Driven by recent progress in deep learning, impressive performances have been achieved for multi-oriented text detection. Yet, the performance drops dramatically in detecting curved texts due to the limited text representation (e.g., horizontal bounding boxes, rotated rectangles, or quadrilaterals). It is of great interest to detect curved texts, which are actually very common in natural scenes. In this paper, we present a novel text detector named TextField for detecting irregular scene texts. Specifically, we learn a direction field pointing away from the nearest text boundary to each text point. This direction field is represented by an image of two-dimensional vectors and learned via a fully convolutional neural network. It encodes both binary text mask and direction information used to separate adjacent text instances, which is challenging for classical segmentation-based approaches. Based on the learned direction field, we apply a simple yet effective morphological-based post-processing to achieve the final detection. Experimental results show that the proposed TextField outperforms the state-of-the-art methods by a large margin (28% and 8%) on two curved text datasets: Total-Text and CTW1500, respectively, and also achieves very competitive performance on multi-oriented datasets: ICDAR 2015 and MSRA-TD500. Furthermore, TextField is robust in generalizing to unseen datasets. The code is available at https://github.com/YukangWang/TextField.
|
1912.03696
|
Ziyang Fan
|
Xiao Han, Ziyang Fan, Chao Li, Zeyang Liu, L.Jay Guo
|
High-Freedom Inverse Design with Deep Neural Network for Metasurface
Filter in the Visible
| null | null | null | null |
cs.OH eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to obtain a metasurface structure capable of filtering the light of
a specific wavelength in the visible band, traditional methods usually traverse
the space of possible designs, searching for a potentially
satisfying device by performing iterative calculations to solve Maxwell's
equations. In this paper, we propose a neural network that can complete an
inverse design process to solve the problem. Compared with the traditional
method, our method is much faster while capable of generating better devices
with the desired spectrum. One of the most significant advantages is that it
can handle a real spectrum as well as an artificial one. Besides, our method
encompasses a high degree of freedom to generate devices, ensuring their
generated spectra resemble desired ones and meeting the accuracy requirements
without losing practicability in the manufacturing process.
|
[
{
"created": "Sun, 8 Dec 2019 15:28:36 GMT",
"version": "v1"
}
] |
2019-12-10
|
[
[
"Han",
"Xiao",
""
],
[
"Fan",
"Ziyang",
""
],
[
"Li",
"Chao",
""
],
[
"Liu",
"Zeyang",
""
],
[
"Guo",
"L. Jay",
""
]
] |
In order to obtain a metasurface structure capable of filtering the light of a specific wavelength in the visible band, traditional methods usually traverse the space of possible designs, searching for a potentially satisfying device by performing iterative calculations to solve Maxwell's equations. In this paper, we propose a neural network that can complete an inverse design process to solve the problem. Compared with the traditional method, our method is much faster while capable of generating better devices with the desired spectrum. One of the most significant advantages is that it can handle a real spectrum as well as an artificial one. Besides, our method encompasses a high degree of freedom to generate devices, ensuring their generated spectra resemble desired ones and meeting the accuracy requirements without losing practicability in the manufacturing process.
|
1205.2476
|
Benoit Otjacques
|
Beno\^it Otjacques, Micka\"el Stefas, Ma\"el Cornil, Fernand Feltz
|
Open Data Visualization: Keeping Traces of the Exploration Process
|
Presented at the First International Workshop On Open Data, WOD-2012
(http://arxiv.org/abs/1204.3726)
| null | null |
WOD/2012/NANTES/1
|
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a system to support the visual exploration of Open Data.
During his/her interactive experience with the graphics, the user can easily
store the current complete state of the visualization application (called a
viewpoint). Next, he/she can compose sequences of these viewpoints (called
scenarios) that can easily be reloaded. This feature allows the user to keep
traces of a former exploration process, which can be useful in a single-user
setting (to support investigations carried out over multiple sessions) as well
as in a collaborative setting (to share points of interest identified in the data set).
|
[
{
"created": "Fri, 11 May 2012 10:46:50 GMT",
"version": "v1"
}
] |
2012-05-14
|
[
[
"Otjacques",
"Benoît",
""
],
[
"Stefas",
"Mickaël",
""
],
[
"Cornil",
"Maël",
""
],
[
"Feltz",
"Fernand",
""
]
] |
This paper describes a system to support the visual exploration of Open Data. During his/her interactive experience with the graphics, the user can easily store the current complete state of the visualization application (called a viewpoint). Next, he/she can compose sequences of these viewpoints (called scenarios) that can easily be reloaded. This feature allows the user to keep traces of a former exploration process, which can be useful in a single-user setting (to support investigations carried out over multiple sessions) as well as in a collaborative setting (to share points of interest identified in the data set).
|
1401.6333
|
Yang Yu
|
Yang Yu and Hong Qian
|
The Sampling-and-Learning Framework: A Statistical View of Evolutionary
Algorithms
| null | null | null | null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evolutionary algorithms (EAs), a large class of general purpose optimization
algorithms inspired by natural phenomena, are widely used in various
industrial optimizations and often show excellent performance. This paper
presents an attempt towards revealing their general power from a statistical
view of EAs. By summarizing a large range of EAs into the sampling-and-learning
framework, we show that the framework directly admits a general analysis on the
probable-absolute-approximate (PAA) query complexity. We particularly focus on
the framework with the learning subroutine being restricted as a binary
classification, which results in the sampling-and-classification (SAC)
algorithms. With the help of the learning theory, we obtain a general upper
bound on the PAA query complexity of SAC algorithms. We further compare SAC
algorithms with the uniform search in different situations. Under the
error-target independence condition, we show that SAC algorithms can achieve
polynomial speedup over the uniform search, but not super-polynomial speedup.
Under the one-side-error condition, we show that super-polynomial speedup can
be achieved. This work only touches the surface of the framework. Its power
under other conditions is still open.
|
[
{
"created": "Fri, 24 Jan 2014 13:10:11 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Apr 2014 14:29:27 GMT",
"version": "v2"
}
] |
2014-04-14
|
[
[
"Yu",
"Yang",
""
],
[
"Qian",
"Hong",
""
]
] |
Evolutionary algorithms (EAs), a large class of general purpose optimization algorithms inspired by natural phenomena, are widely used in various industrial optimizations and often show excellent performance. This paper presents an attempt towards revealing their general power from a statistical view of EAs. By summarizing a large range of EAs into the sampling-and-learning framework, we show that the framework directly admits a general analysis on the probable-absolute-approximate (PAA) query complexity. We particularly focus on the framework with the learning subroutine being restricted as a binary classification, which results in the sampling-and-classification (SAC) algorithms. With the help of the learning theory, we obtain a general upper bound on the PAA query complexity of SAC algorithms. We further compare SAC algorithms with the uniform search in different situations. Under the error-target independence condition, we show that SAC algorithms can achieve polynomial speedup over the uniform search, but not super-polynomial speedup. Under the one-side-error condition, we show that super-polynomial speedup can be achieved. This work only touches the surface of the framework. Its power under other conditions is still open.
|
2408.00374
|
Xi Chen
|
Xi Chen, Rahul Bhadani, Larry Head
|
Conformal Trajectory Prediction with Multi-View Data Integration in
Cooperative Driving
| null | null | null | null |
cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Current research on trajectory prediction primarily relies on data collected
by onboard sensors of an ego vehicle. With the rapid advancement in connected
technologies, such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure
(V2I) communication, valuable information from alternate views becomes
accessible via wireless networks. The integration of information from
alternative views has the potential to overcome the inherent limitations
associated with a single viewpoint, such as occlusions and limited field of
view. In this work, we introduce V2INet, a novel trajectory prediction
framework designed to model multi-view data by extending existing single-view
models. Unlike previous approaches where the multi-view data is manually fused
or formulated as a separate training stage, our model supports end-to-end
training, enhancing both flexibility and performance. Moreover, the predicted
multimodal trajectories are calibrated by a post-hoc conformal prediction
module to get valid and efficient confidence regions. We evaluated the entire
framework using the real-world V2I dataset V2X-Seq. Our results demonstrate
superior performance in terms of Final Displacement Error (FDE) and Miss Rate
(MR) using a single GPU. The code is publicly available at:
\url{https://github.com/xichennn/V2I_trajectory_prediction}.
|
[
{
"created": "Thu, 1 Aug 2024 08:32:03 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2024 13:00:46 GMT",
"version": "v2"
}
] |
2024-08-05
|
[
[
"Chen",
"Xi",
""
],
[
"Bhadani",
"Rahul",
""
],
[
"Head",
"Larry",
""
]
] |
Current research on trajectory prediction primarily relies on data collected by onboard sensors of an ego vehicle. With the rapid advancement in connected technologies, such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, valuable information from alternate views becomes accessible via wireless networks. The integration of information from alternative views has the potential to overcome the inherent limitations associated with a single viewpoint, such as occlusions and limited field of view. In this work, we introduce V2INet, a novel trajectory prediction framework designed to model multi-view data by extending existing single-view models. Unlike previous approaches where the multi-view data is manually fused or formulated as a separate training stage, our model supports end-to-end training, enhancing both flexibility and performance. Moreover, the predicted multimodal trajectories are calibrated by a post-hoc conformal prediction module to get valid and efficient confidence regions. We evaluated the entire framework using the real-world V2I dataset V2X-Seq. Our results demonstrate superior performance in terms of Final Displacement Error (FDE) and Miss Rate (MR) using a single GPU. The code is publicly available at: \url{https://github.com/xichennn/V2I_trajectory_prediction}.
|
1807.06414
|
Mehdi Ben Lazreg
|
Mehdi Ben Lazreg, Morten Goodwin
|
Combining a Context Aware Neural Network with a Denoising Autoencoder
for Measuring String Similarities
| null | null | null | null |
cs.IR cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Measuring similarities between strings is central for many established and
fast growing research areas including information retrieval, biology, and
natural language processing. The traditional approach for string similarity
measurements is to define a metric over a word space that quantifies and sums
up the differences between characters in two strings. The state-of-the-art in
the area has, surprisingly, not evolved much during the last few decades. The
majority of the metrics are based on a simple comparison between character and
character distributions without consideration for the context of the words.
This paper proposes a string metric that encompasses similarities between
strings based on (1) the character similarities between the words, including
non-standard and standard spellings of the same words, and (2) the context of
the words. Our proposal is a neural network composed of a denoising autoencoder
and what we call a context encoder specifically designed to find similarities
between the words based on their context. The experimental results show that
the resulting metric succeeds in 85.4\% of the cases in finding the correct
version of a non-standard spelling among the closest words, compared to 63.2\%
with the established Normalised-Levenshtein distance. Besides, we show that
words used in similar contexts are, with our approach, calculated to be more
similar than words with different contexts, which is a desirable property missing in
established string metrics.
|
[
{
"created": "Mon, 16 Jul 2018 12:29:23 GMT",
"version": "v1"
}
] |
2018-08-20
|
[
[
"Lazreg",
"Mehdi Ben",
""
],
[
"Goodwin",
"Morten",
""
]
] |
Measuring similarities between strings is central for many established and fast growing research areas including information retrieval, biology, and natural language processing. The traditional approach for string similarity measurements is to define a metric over a word space that quantifies and sums up the differences between characters in two strings. The state-of-the-art in the area has, surprisingly, not evolved much during the last few decades. The majority of the metrics are based on a simple comparison between character and character distributions without consideration for the context of the words. This paper proposes a string metric that encompasses similarities between strings based on (1) the character similarities between the words, including non-standard and standard spellings of the same words, and (2) the context of the words. Our proposal is a neural network composed of a denoising autoencoder and what we call a context encoder specifically designed to find similarities between the words based on their context. The experimental results show that the resulting metric succeeds in 85.4\% of the cases in finding the correct version of a non-standard spelling among the closest words, compared to 63.2\% with the established Normalised-Levenshtein distance. Besides, we show that words used in similar contexts are, with our approach, calculated to be more similar than words with different contexts, which is a desirable property missing in established string metrics.
|
1402.5045
|
Lucas Paletta
|
Nicolas Sabouret, Haza\"el Jones, Magalie Ochs, Mathieu Chollet,
Catherine Pelachaud
|
Expressing social attitudes in virtual agents for social training games
| null | null | null |
IDGEI/2014/11
|
cs.HC cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of virtual agents in social coaching has increased rapidly in the
last decade. In order to train the user in the different situations that can occur
in real life, the virtual agent should be able to express different social
attitudes. In this paper, we propose a model of social attitudes that enables a
virtual agent to reason on the appropriate social attitude to express during
the interaction with a user given the course of the interaction, but also the
emotions, mood and personality of the agent. Moreover, the model enables the
virtual agent to display its social attitude through its non-verbal behaviour.
The proposed model has been developed in the context of job interview
simulation. The methodology used to develop such a model combined a theoretical
and an empirical approach. Indeed, the model is based both on the literature in
Human and Social Sciences on social attitudes but also on the analysis of an
audiovisual corpus of job interviews and on post-hoc interviews with the
recruiters on their expressed attitudes during the job interview.
|
[
{
"created": "Thu, 20 Feb 2014 15:41:26 GMT",
"version": "v1"
}
] |
2014-02-21
|
[
[
"Sabouret",
"Nicolas",
""
],
[
"Jones",
"Hazaël",
""
],
[
"Ochs",
"Magalie",
""
],
[
"Chollet",
"Mathieu",
""
],
[
"Pelachaud",
"Catherine",
""
]
] |
The use of virtual agents in social coaching has increased rapidly in the last decade. In order to train the user in the different situations that can occur in real life, the virtual agent should be able to express different social attitudes. In this paper, we propose a model of social attitudes that enables a virtual agent to reason on the appropriate social attitude to express during the interaction with a user given the course of the interaction, but also the emotions, mood and personality of the agent. Moreover, the model enables the virtual agent to display its social attitude through its non-verbal behaviour. The proposed model has been developed in the context of job interview simulation. The methodology used to develop such a model combined a theoretical and an empirical approach. Indeed, the model is based both on the literature in Human and Social Sciences on social attitudes but also on the analysis of an audiovisual corpus of job interviews and on post-hoc interviews with the recruiters on their expressed attitudes during the job interview.
|
1611.01769
|
Dmitry Kosolobov
|
Dominik Kempa and Dmitry Kosolobov
|
LZ-End Parsing in Compressed Space
|
12 pages, 4 figures
| null |
10.1109/DCC.2017.73
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an algorithm that constructs the LZ-End parsing (a variation of
LZ77) of a given string of length $n$ in $O(n\log\ell)$ expected time and $O(z
+ \ell)$ space, where $z$ is the number of phrases in the parsing and $\ell$ is
the length of the longest phrase. As an option, we can fix $\ell$ (e.g., to the
size of RAM) thus obtaining a reasonable LZ-End approximation with the same
functionality and the length of phrases restricted by $\ell$. This modified
algorithm constructs the parsing in streaming fashion in one left to right pass
on the input string w.h.p. and performs one right to left pass to verify the
correctness of the result. Experimentally comparing this version to other
LZ77-based analogs, we show that it is of practical interest.
|
[
{
"created": "Sun, 6 Nov 2016 12:47:25 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Jun 2017 21:38:21 GMT",
"version": "v2"
}
] |
2020-12-15
|
[
[
"Kempa",
"Dominik",
""
],
[
"Kosolobov",
"Dmitry",
""
]
] |
We present an algorithm that constructs the LZ-End parsing (a variation of LZ77) of a given string of length $n$ in $O(n\log\ell)$ expected time and $O(z + \ell)$ space, where $z$ is the number of phrases in the parsing and $\ell$ is the length of the longest phrase. As an option, we can fix $\ell$ (e.g., to the size of RAM) thus obtaining a reasonable LZ-End approximation with the same functionality and the length of phrases restricted by $\ell$. This modified algorithm constructs the parsing in streaming fashion in one left to right pass on the input string w.h.p. and performs one right to left pass to verify the correctness of the result. Experimentally comparing this version to other LZ77-based analogs, we show that it is of practical interest.
|
2403.11495
|
Yile Chen
|
Yile Chen, Xiucheng Li, Gao Cong, Zhifeng Bao, Cheng Long
|
Semantic-Enhanced Representation Learning for Road Networks with
Temporal Dynamics
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we introduce a novel framework called Toast for learning
general-purpose representations of road networks, along with its advanced
counterpart DyToast, designed to enhance the integration of temporal dynamics
to boost the performance of various time-sensitive downstream tasks.
Specifically, we propose to encode two pivotal semantic characteristics
intrinsic to road networks: traffic patterns and traveling semantics. To
achieve this, we refine the skip-gram module by incorporating auxiliary
objectives aimed at predicting the traffic context associated with a target
road segment. Moreover, we leverage trajectory data and design pre-training
strategies based on Transformer to distill traveling semantics on road
networks. DyToast further augments this framework by employing unified
trigonometric functions characterized by their beneficial properties, enabling
the capture of temporal evolution and dynamic nature of road networks more
effectively. With these proposed techniques, we can obtain representations that
encode multi-faceted aspects of knowledge within road networks, applicable
across both road segment-based applications and trajectory-based applications.
Extensive experiments on two real-world datasets across three tasks demonstrate
that our proposed framework consistently outperforms the state-of-the-art
baselines by a significant margin.
|
[
{
"created": "Mon, 18 Mar 2024 05:59:56 GMT",
"version": "v1"
}
] |
2024-03-19
|
[
[
"Chen",
"Yile",
""
],
[
"Li",
"Xiucheng",
""
],
[
"Cong",
"Gao",
""
],
[
"Bao",
"Zhifeng",
""
],
[
"Long",
"Cheng",
""
]
] |
In this study, we introduce a novel framework called Toast for learning general-purpose representations of road networks, along with its advanced counterpart DyToast, designed to enhance the integration of temporal dynamics to boost the performance of various time-sensitive downstream tasks. Specifically, we propose to encode two pivotal semantic characteristics intrinsic to road networks: traffic patterns and traveling semantics. To achieve this, we refine the skip-gram module by incorporating auxiliary objectives aimed at predicting the traffic context associated with a target road segment. Moreover, we leverage trajectory data and design pre-training strategies based on Transformer to distill traveling semantics on road networks. DyToast further augments this framework by employing unified trigonometric functions characterized by their beneficial properties, enabling the capture of temporal evolution and dynamic nature of road networks more effectively. With these proposed techniques, we can obtain representations that encode multi-faceted aspects of knowledge within road networks, applicable across both road segment-based applications and trajectory-based applications. Extensive experiments on two real-world datasets across three tasks demonstrate that our proposed framework consistently outperforms the state-of-the-art baselines by a significant margin.
|
2106.12807
|
Vijay Lingam
|
Vijay Lingam, Rahul Ragesh, Arun Iyer, Sundararajan Sellamanickam
|
Simple Truncated SVD based Model for Node Classification on Heterophilic
Graphs
|
Accepted at Deep Learning on Graphs: Method and Applications (DLG-KDD
2021)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) have shown excellent performance on graphs that
exhibit strong homophily with respect to the node labels, i.e., connected nodes
have the same labels. However, they perform poorly on heterophilic graphs. Recent
approaches have typically modified aggregation schemes, designed adaptive graph
filters, etc. to address this limitation. In spite of this, the performance on
heterophilic graphs can still be poor. We propose a simple alternative method
that exploits Truncated Singular Value Decomposition (TSVD) of topological
structure and node features. Our approach achieves up to ~30% improvement in
performance over state-of-the-art methods on heterophilic graphs. This work is
an early investigation into methods that differ from aggregation-based
approaches. Our experimental results suggest that it might be important to
explore alternatives to aggregation methods for the heterophilic setting.
|
[
{
"created": "Thu, 24 Jun 2021 07:48:18 GMT",
"version": "v1"
}
] |
2021-06-25
|
[
[
"Lingam",
"Vijay",
""
],
[
"Ragesh",
"Rahul",
""
],
[
"Iyer",
"Arun",
""
],
[
"Sellamanickam",
"Sundararajan",
""
]
] |
Graph Neural Networks (GNNs) have shown excellent performance on graphs that exhibit strong homophily with respect to the node labels, i.e., connected nodes have the same labels. However, they perform poorly on heterophilic graphs. Recent approaches have typically modified aggregation schemes, designed adaptive graph filters, etc. to address this limitation. In spite of this, the performance on heterophilic graphs can still be poor. We propose a simple alternative method that exploits Truncated Singular Value Decomposition (TSVD) of topological structure and node features. Our approach achieves up to ~30% improvement in performance over state-of-the-art methods on heterophilic graphs. This work is an early investigation into methods that differ from aggregation-based approaches. Our experimental results suggest that it might be important to explore alternatives to aggregation methods for the heterophilic setting.
|
2211.11154
|
Wei Wei
|
Wei Wei, Daheng Li, Peng Wang, Yiming Li, Wanyi Li, Yongkang Luo, Jun
Zhong
|
DVGG: Deep Variational Grasp Generation for Dextrous Manipulation
|
Accepted by Robotics and Automation Letters (RA-L, 2021)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grasping with anthropomorphic robotic hands involves many more hand-object
interactions than parallel-jaw grippers. Modeling hand-object
interactions is essential to the study of multi-finger hand dextrous
manipulation. This work presents DVGG, an efficient grasp generation network
that takes single-view observation as input and predicts high-quality grasp
configurations for unknown objects. In general, our generative model consists
of three components: 1) Point cloud completion for the target object based on
the partial observation; 2) Generation of diverse sets of grasps given the
complete point cloud; 3) Iterative grasp pose refinement for physically
plausible grasp optimization. To train our model, we build a large-scale
grasping dataset that contains about 300 common object models with 1.5M
annotated grasps in simulation. Experiments in simulation show that our model
can predict robust grasp poses with a wide variety and high success rate. Real
robot platform experiments demonstrate that the model trained on our dataset
performs well in the real world. Remarkably, our method achieves a grasp
success rate of 70.7\% for novel objects on the real robot platform, which is a
significant improvement over the baseline methods.
|
[
{
"created": "Mon, 21 Nov 2022 02:34:52 GMT",
"version": "v1"
}
] |
2022-11-22
|
[
[
"Wei",
"Wei",
""
],
[
"Li",
"Daheng",
""
],
[
"Wang",
"Peng",
""
],
[
"Li",
"Yiming",
""
],
[
"Li",
"Wanyi",
""
],
[
"Luo",
"Yongkang",
""
],
[
"Zhong",
"Jun",
""
]
] |
Grasping with anthropomorphic robotic hands involves many more hand-object interactions than parallel-jaw grippers. Modeling hand-object interactions is essential to the study of multi-finger hand dextrous manipulation. This work presents DVGG, an efficient grasp generation network that takes single-view observation as input and predicts high-quality grasp configurations for unknown objects. In general, our generative model consists of three components: 1) Point cloud completion for the target object based on the partial observation; 2) Generation of diverse sets of grasps given the complete point cloud; 3) Iterative grasp pose refinement for physically plausible grasp optimization. To train our model, we build a large-scale grasping dataset that contains about 300 common object models with 1.5M annotated grasps in simulation. Experiments in simulation show that our model can predict robust grasp poses with a wide variety and high success rate. Real robot platform experiments demonstrate that the model trained on our dataset performs well in the real world. Remarkably, our method achieves a grasp success rate of 70.7\% for novel objects on the real robot platform, which is a significant improvement over the baseline methods.
|
2402.04453
|
Tobias Vente
|
Tobias Vente, Joeran Beel
|
The Potential of AutoML for Recommender Systems
| null | null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automated Machine Learning (AutoML) has greatly advanced applications of
Machine Learning (ML) including model compression, machine translation, and
computer vision. Recommender Systems (RecSys) can be seen as an application of
ML. Yet, AutoML has received little attention in the RecSys community; nor has
RecSys received notable attention in the AutoML community. Only a few relatively
simple Automated Recommender Systems (AutoRecSys) libraries exist that adopt
AutoML techniques. However, these libraries are based on student projects and
do not offer the features and thorough development of AutoML libraries. We set
out to determine how AutoML libraries perform in the scenario of an
inexperienced user who wants to implement a recommender system. We compared the
predictive performance of 60 AutoML, AutoRecSys, ML, and RecSys algorithms from
15 libraries, including a mean predictor baseline, on 14 explicit feedback
RecSys datasets. To simulate the perspective of an inexperienced user, the
algorithms were evaluated with default hyperparameters. We found that AutoML
and AutoRecSys libraries performed best. AutoML libraries performed best for
six of the 14 datasets (43%), but it was not always the same AutoML library
performing best. The single-best library was the AutoRecSys library
Auto-Surprise, which performed best on five datasets (36%). On three datasets
(21%), AutoML libraries performed poorly, and RecSys libraries with default
parameters performed best. Although RecSys algorithms obtained 50% of all
placements in the top five per dataset, they fall behind AutoML on average. ML
algorithms generally performed the worst.
|
[
{
"created": "Tue, 6 Feb 2024 22:42:28 GMT",
"version": "v1"
}
] |
2024-02-08
|
[
[
"Vente",
"Tobias",
""
],
[
"Beel",
"Joeran",
""
]
] |
Automated Machine Learning (AutoML) has greatly advanced applications of Machine Learning (ML) including model compression, machine translation, and computer vision. Recommender Systems (RecSys) can be seen as an application of ML. Yet, AutoML has received little attention in the RecSys community; nor has RecSys received notable attention in the AutoML community. Only a few relatively simple Automated Recommender Systems (AutoRecSys) libraries exist that adopt AutoML techniques. However, these libraries are based on student projects and do not offer the features and thorough development of AutoML libraries. We set out to determine how AutoML libraries perform in the scenario of an inexperienced user who wants to implement a recommender system. We compared the predictive performance of 60 AutoML, AutoRecSys, ML, and RecSys algorithms from 15 libraries, including a mean predictor baseline, on 14 explicit feedback RecSys datasets. To simulate the perspective of an inexperienced user, the algorithms were evaluated with default hyperparameters. We found that AutoML and AutoRecSys libraries performed best. AutoML libraries performed best for six of the 14 datasets (43%), but it was not always the same AutoML library performing best. The single-best library was the AutoRecSys library Auto-Surprise, which performed best on five datasets (36%). On three datasets (21%), AutoML libraries performed poorly, and RecSys libraries with default parameters performed best. Although RecSys algorithms obtained 50% of all placements in the top five per dataset, they fall behind AutoML on average. ML algorithms generally performed the worst.
|
2403.07573
|
Masoud Shokrnezhad
|
Masoud Shokrnezhad, Hao Yu, Tarik Taleb, Richard Li, Kyunghan Lee,
Jaeseung Song, and Cedric Westphal
|
Towards a Dynamic Future with Adaptable Computing and Network
Convergence (ACNC)
| null | null | null | null |
cs.NI cs.AI cs.DC cs.ET cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of advancing 6G, a substantial paradigm shift is anticipated,
highlighting comprehensive everything-to-everything interactions characterized
by numerous connections and stringent adherence to Quality of
Service/Experience (QoS/E) prerequisites. The imminent challenge stems from
resource scarcity, prompting a deliberate transition to Computing-Network
Convergence (CNC) as an auspicious approach for joint resource orchestration.
While CNC-based mechanisms have garnered attention, their effectiveness in
realizing future services, particularly in use cases like the Metaverse, may
encounter limitations due to the continually changing nature of users,
services, and resources. Hence, this paper presents the concept of Adaptable
CNC (ACNC) as an autonomous Machine Learning (ML)-aided mechanism crafted for
the joint orchestration of computing and network resources, catering to dynamic
and voluminous user requests with stringent requirements. ACNC encompasses two
primary functionalities: state recognition and context detection. Given the
intricate nature of the user-service-computing-network space, the paper employs
dimension reduction to generate live, holistic, abstract system states in a
hierarchical structure. To address the challenges posed by dynamic changes,
Continual Learning (CL) is employed, classifying the system state into contexts
controlled by dedicated ML agents, enabling them to operate efficiently. These
two functionalities are intricately linked within a closed loop overseen by the
End-to-End (E2E) orchestrator to allocate resources. The paper introduces the
components of ACNC, proposes a Metaverse scenario to exemplify ACNC's role in
resource provisioning with Segment Routing v6 (SRv6), outlines ACNC's workflow,
details a numerical analysis for efficiency assessment, and concludes with
discussions on relevant challenges and potential avenues for future research.
|
[
{
"created": "Tue, 12 Mar 2024 12:03:16 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Shokrnezhad",
"Masoud",
""
],
[
"Yu",
"Hao",
""
],
[
"Taleb",
"Tarik",
""
],
[
"Li",
"Richard",
""
],
[
"Lee",
"Kyunghan",
""
],
[
"Song",
"Jaeseung",
""
],
[
"Westphal",
"Cedric",
""
]
] |
In the context of advancing 6G, a substantial paradigm shift is anticipated, highlighting comprehensive everything-to-everything interactions characterized by numerous connections and stringent adherence to Quality of Service/Experience (QoS/E) prerequisites. The imminent challenge stems from resource scarcity, prompting a deliberate transition to Computing-Network Convergence (CNC) as an auspicious approach for joint resource orchestration. While CNC-based mechanisms have garnered attention, their effectiveness in realizing future services, particularly in use cases like the Metaverse, may encounter limitations due to the continually changing nature of users, services, and resources. Hence, this paper presents the concept of Adaptable CNC (ACNC) as an autonomous Machine Learning (ML)-aided mechanism crafted for the joint orchestration of computing and network resources, catering to dynamic and voluminous user requests with stringent requirements. ACNC encompasses two primary functionalities: state recognition and context detection. Given the intricate nature of the user-service-computing-network space, the paper employs dimension reduction to generate live, holistic, abstract system states in a hierarchical structure. To address the challenges posed by dynamic changes, Continual Learning (CL) is employed, classifying the system state into contexts controlled by dedicated ML agents, enabling them to operate efficiently. These two functionalities are intricately linked within a closed loop overseen by the End-to-End (E2E) orchestrator to allocate resources. The paper introduces the components of ACNC, proposes a Metaverse scenario to exemplify ACNC's role in resource provisioning with Segment Routing v6 (SRv6), outlines ACNC's workflow, details a numerical analysis for efficiency assessment, and concludes with discussions on relevant challenges and potential avenues for future research.
|
2106.06519
|
Pakhi Bamdev
|
Karthik Ganesan, Pakhi Bamdev, Jaivarsan B, Amresh Venugopal, Abhinav
Tushar
|
N-Best ASR Transformer: Enhancing SLU Performance using Multiple ASR
Hypotheses
|
6 pages, 3 figures, Accepted at ACL 2021 as a main conference paper
| null | null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Spoken Language Understanding (SLU) systems parse speech into semantic
structures like dialog acts and slots. This involves the use of an Automatic
Speech Recognizer (ASR) to transcribe speech into multiple text alternatives
(hypotheses). Transcription errors, common in ASRs, impact downstream SLU
performance negatively. Approaches to mitigate such errors involve using richer
information from the ASR, either in the form of N-best hypotheses or word-lattices.
We hypothesize that transformer models learn better with a simpler utterance
representation using the concatenation of the N-best ASR alternatives, where
each alternative is separated by a special delimiter [SEP]. In our work, we
test our hypothesis by using concatenated N-best ASR alternatives as the input
to transformer encoder models, namely BERT and XLM-RoBERTa, and achieve
performance equivalent to the prior state-of-the-art model on the DSTC2 dataset. We
also show that our approach significantly outperforms the prior
state-of-the-art in the low-data regime. Additionally, this
methodology is accessible to users of third-party ASR APIs which do not provide
word-lattice information.
|
[
{
"created": "Fri, 11 Jun 2021 17:29:00 GMT",
"version": "v1"
}
] |
2021-06-14
|
[
[
"Ganesan",
"Karthik",
""
],
[
"Bamdev",
"Pakhi",
""
],
[
"B",
"Jaivarsan",
""
],
[
"Venugopal",
"Amresh",
""
],
[
"Tushar",
"Abhinav",
""
]
] |
Spoken Language Understanding (SLU) systems parse speech into semantic structures like dialog acts and slots. This involves the use of an Automatic Speech Recognizer (ASR) to transcribe speech into multiple text alternatives (hypotheses). Transcription errors, common in ASRs, impact downstream SLU performance negatively. Approaches to mitigate such errors involve using richer information from the ASR, either in the form of N-best hypotheses or word-lattices. We hypothesize that transformer models learn better with a simpler utterance representation using the concatenation of the N-best ASR alternatives, where each alternative is separated by a special delimiter [SEP]. In our work, we test our hypothesis by using concatenated N-best ASR alternatives as the input to transformer encoder models, namely BERT and XLM-RoBERTa, and achieve performance equivalent to the prior state-of-the-art model on the DSTC2 dataset. We also show that our approach significantly outperforms the prior state-of-the-art in the low-data regime. Additionally, this methodology is accessible to users of third-party ASR APIs which do not provide word-lattice information.
|
1504.01358
|
Logan Washbourne
|
Logan Washbourne
|
A Survey of P2P Network Security
|
12 pages, 6 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a review of peer-to-peer network security. Popular for
sharing of multimedia files, these networks carry risks and vulnerabilities
relating to data integrity, spyware, adware, and unwanted files. Further
attacks include those of forgery, pollution, repudiation, membership and
Eclipse attacks, neighbor selection attacks, Sybil, DoS, and omission attacks.
We review some protection mechanisms that have been devised.
|
[
{
"created": "Mon, 6 Apr 2015 19:10:03 GMT",
"version": "v1"
}
] |
2015-04-07
|
[
[
"Washbourne",
"Logan",
""
]
] |
This paper presents a review of peer-to-peer network security. Popular for sharing of multimedia files, these networks carry risks and vulnerabilities relating to data integrity, spyware, adware, and unwanted files. Further attacks include those of forgery, pollution, repudiation, membership and Eclipse attacks, neighbor selection attacks, Sybil, DoS, and omission attacks. We review some protection mechanisms that have been devised.
|
1912.02919
|
Stephanie L. Hyland
|
Stephanie L. Hyland and Shruti Tople
|
An Empirical Study on the Intrinsic Privacy of SGD
|
21 pages, 11 figures, 8 tables
| null | null | null |
cs.LG cs.CR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Introducing noise in the training of machine learning systems is a powerful
way to protect individual privacy via differential privacy guarantees, but
comes at a cost to utility. This work looks at whether the inherent randomness
of stochastic gradient descent (SGD) could contribute to privacy, effectively
reducing the amount of \emph{additional} noise required to achieve a given
privacy guarantee. We conduct a large-scale empirical study to examine this
question. Training a grid of over 120,000 models across four datasets (tabular
and images) on convex and non-convex objectives, we demonstrate that the random
seed has a larger impact on model weights than any individual training example.
We test the distribution over weights induced by the seed, finding that the
simple convex case can be modelled with a multivariate Gaussian posterior,
while neural networks exhibit multi-modal and non-Gaussian weight
distributions. By casting convex SGD as a Gaussian mechanism, we then estimate
an `intrinsic' data-dependent $\epsilon_i(\mathcal{D})$, finding values as low
as 6.3, dropping to 1.9 using empirical estimates. We use a membership
inference attack to estimate $\epsilon$ for non-convex SGD and demonstrate that
hiding the random seed from the adversary results in a statistically
significant reduction in attack performance, corresponding to a reduction in
the effective $\epsilon$. These results provide empirical evidence that SGD
exhibits appreciable variability relative to its dataset sensitivity, and this
`intrinsic noise' has the potential to be leveraged to improve the utility of
privacy-preserving machine learning.
|
[
{
"created": "Thu, 5 Dec 2019 23:28:05 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Mar 2020 16:08:31 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Jun 2020 11:46:04 GMT",
"version": "v3"
},
{
"created": "Mon, 28 Feb 2022 10:28:07 GMT",
"version": "v4"
}
] |
2022-03-01
|
[
[
"Hyland",
"Stephanie L.",
""
],
[
"Tople",
"Shruti",
""
]
] |
Introducing noise in the training of machine learning systems is a powerful way to protect individual privacy via differential privacy guarantees, but comes at a cost to utility. This work looks at whether the inherent randomness of stochastic gradient descent (SGD) could contribute to privacy, effectively reducing the amount of \emph{additional} noise required to achieve a given privacy guarantee. We conduct a large-scale empirical study to examine this question. Training a grid of over 120,000 models across four datasets (tabular and images) on convex and non-convex objectives, we demonstrate that the random seed has a larger impact on model weights than any individual training example. We test the distribution over weights induced by the seed, finding that the simple convex case can be modelled with a multivariate Gaussian posterior, while neural networks exhibit multi-modal and non-Gaussian weight distributions. By casting convex SGD as a Gaussian mechanism, we then estimate an `intrinsic' data-dependent $\epsilon_i(\mathcal{D})$, finding values as low as 6.3, dropping to 1.9 using empirical estimates. We use a membership inference attack to estimate $\epsilon$ for non-convex SGD and demonstrate that hiding the random seed from the adversary results in a statistically significant reduction in attack performance, corresponding to a reduction in the effective $\epsilon$. These results provide empirical evidence that SGD exhibits appreciable variability relative to its dataset sensitivity, and this `intrinsic noise' has the potential to be leveraged to improve the utility of privacy-preserving machine learning.
|
1801.02442
|
Ali Bereyhi
|
Ali Bereyhi, Mohammad Ali Sedaghat, Ralf R. M\"uller
|
Precoding via Approximate Message Passing with Instantaneous Signal
Constraints
|
2018 International Zurich Seminar on Information and Communication
(IZS) 5 pages and 2 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a low complexity precoding algorithm based on the
recently proposed Generalized Least Square Error (GLSE) scheme with generic
penalty and support. The algorithm iteratively constructs the transmit vector
via Approximate Message Passing (AMP). Using the asymptotic decoupling property
of GLSE precoders, we derive closed form fixed point equations to tune the
parameters in the proposed algorithm for a general set of instantaneous signal
constraints. The tuning strategy is then utilized to construct transmit vectors
with restricted peak-to-average power ratios and to efficiently select a subset
of transmit antennas. The numerical investigations show that the proposed
algorithm tracks the large-system performance of GLSE precoders even for a
moderate number of antennas.
|
[
{
"created": "Mon, 8 Jan 2018 14:33:23 GMT",
"version": "v1"
}
] |
2018-01-09
|
[
[
"Bereyhi",
"Ali",
""
],
[
"Sedaghat",
"Mohammad Ali",
""
],
[
"Müller",
"Ralf R.",
""
]
] |
This paper proposes a low complexity precoding algorithm based on the recently proposed Generalized Least Square Error (GLSE) scheme with generic penalty and support. The algorithm iteratively constructs the transmit vector via Approximate Message Passing (AMP). Using the asymptotic decoupling property of GLSE precoders, we derive closed form fixed point equations to tune the parameters in the proposed algorithm for a general set of instantaneous signal constraints. The tuning strategy is then utilized to construct transmit vectors with restricted peak-to-average power ratios and to efficiently select a subset of transmit antennas. The numerical investigations show that the proposed algorithm tracks the large-system performance of GLSE precoders even for a moderate number of antennas.
|
2407.01717
|
Ahmad AlMughrabi
|
Ahmad AlMughrabi, Umair Haroon, Ricardo Marques, Petia Radeva
|
VolETA: One- and Few-shot Food Volume Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Accurate food volume estimation is essential for dietary assessment,
nutritional tracking, and portion control applications. We present VolETA, a
sophisticated methodology for estimating food volume using 3D generative
techniques. Our approach creates a scaled 3D mesh of food objects using one or
a few RGBD images. We start by selecting keyframes based on the RGB images and
then segmenting the reference object in the RGB images using XMem++.
Simultaneously, camera positions are estimated and refined using the PixSfM
technique. The segmented food images, reference objects, and camera poses are
combined to form a data model suitable for NeuS2. Independent mesh
reconstructions for reference and food objects are carried out, with scaling
factors determined using MeshLab based on the reference object. Moreover, depth
information is used to fine-tune the scaling factors by estimating the
potential volume range. The fine-tuned scaling factors are then applied to the
cleaned food meshes for accurate volume measurements. Similarly, we feed a
segmented RGB image into the One-2-3-45 model for one-shot food volume
estimation, resulting in a mesh. We then apply the obtained scaling factors
to the cleaned food mesh for accurate volume measurements. Our experiments show
that our method effectively addresses occlusions, varying lighting conditions,
and complex food geometries, achieving robust and accurate volume estimations
with 10.97% MAPE using the MTF dataset. This innovative approach enhances the
precision of volume assessments and significantly contributes to computational
nutrition and dietary monitoring advancements.
|
[
{
"created": "Mon, 1 Jul 2024 18:47:15 GMT",
"version": "v1"
}
] |
2024-07-03
|
[
[
"AlMughrabi",
"Ahmad",
""
],
[
"Haroon",
"Umair",
""
],
[
"Marques",
"Ricardo",
""
],
[
"Radeva",
"Petia",
""
]
] |
Accurate food volume estimation is essential for dietary assessment, nutritional tracking, and portion control applications. We present VolETA, a sophisticated methodology for estimating food volume using 3D generative techniques. Our approach creates a scaled 3D mesh of food objects using one or a few RGBD images. We start by selecting keyframes based on the RGB images and then segmenting the reference object in the RGB images using XMem++. Simultaneously, camera positions are estimated and refined using the PixSfM technique. The segmented food images, reference objects, and camera poses are combined to form a data model suitable for NeuS2. Independent mesh reconstructions for reference and food objects are carried out, with scaling factors determined using MeshLab based on the reference object. Moreover, depth information is used to fine-tune the scaling factors by estimating the potential volume range. The fine-tuned scaling factors are then applied to the cleaned food meshes for accurate volume measurements. Similarly, we feed a segmented RGB image into the One-2-3-45 model for one-shot food volume estimation, resulting in a mesh. We then apply the obtained scaling factors to the cleaned food mesh for accurate volume measurements. Our experiments show that our method effectively addresses occlusions, varying lighting conditions, and complex food geometries, achieving robust and accurate volume estimations with 10.97% MAPE using the MTF dataset. This innovative approach enhances the precision of volume assessments and significantly contributes to computational nutrition and dietary monitoring advancements.
|
2212.11167
|
Yunlong Lin
|
Yunlong Lin, Zirui Li, Cheng Gong, Chao Lu, Xinwei Wang, Jianwei Gong
|
Continual Interactive Behavior Learning With Traffic Divergence
Measurement: A Dynamic Gradient Scenario Memory Approach
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing autonomous vehicles (AVs) helps improve the road safety and
traffic efficiency of intelligent transportation systems (ITS). Accurately
predicting the trajectories of traffic participants is essential to the
decision-making and motion planning of AVs in interactive scenarios. Recently,
learning-based trajectory predictors have shown state-of-the-art performance in
highway or urban areas. However, most existing learning-based models trained
with fixed datasets may perform poorly in continuously changing scenarios.
Specifically, they may not perform well in previously learned scenarios after
learning a new one. This phenomenon is called "catastrophic forgetting". Few studies
investigate trajectory predictions in continuous scenarios, where catastrophic
forgetting may happen. To handle this problem, first, a novel continual
learning (CL) approach for vehicle trajectory prediction is proposed in this
paper. Then, inspired by brain science, a dynamic memory mechanism is developed
by utilizing the measurement of traffic divergence between scenarios, which
balances the performance and training efficiency of the proposed CL approach.
Finally, datasets collected from different locations are used to design
continual training and testing methods in experiments. Experimental results
show that the proposed approach achieves consistently high prediction accuracy
in continuous scenarios without re-training, which mitigates catastrophic
forgetting compared to non-CL approaches. The implementation of the proposed
approach is publicly available at https://github.com/BIT-Jack/D-GSM
|
[
{
"created": "Wed, 21 Dec 2022 16:28:50 GMT",
"version": "v1"
}
] |
2022-12-22
|
[
[
"Lin",
"Yunlong",
""
],
[
"Li",
"Zirui",
""
],
[
"Gong",
"Cheng",
""
],
[
"Lu",
"Chao",
""
],
[
"Wang",
"Xinwei",
""
],
[
"Gong",
"Jianwei",
""
]
] |
Developing autonomous vehicles (AVs) helps improve the road safety and traffic efficiency of intelligent transportation systems (ITS). Accurately predicting the trajectories of traffic participants is essential to the decision-making and motion planning of AVs in interactive scenarios. Recently, learning-based trajectory predictors have shown state-of-the-art performance in highway or urban areas. However, most existing learning-based models trained with fixed datasets may perform poorly in continuously changing scenarios. Specifically, they may not perform well in previously learned scenarios after learning a new one. This phenomenon is called "catastrophic forgetting". Few studies investigate trajectory predictions in continuous scenarios, where catastrophic forgetting may happen. To handle this problem, first, a novel continual learning (CL) approach for vehicle trajectory prediction is proposed in this paper. Then, inspired by brain science, a dynamic memory mechanism is developed by utilizing the measurement of traffic divergence between scenarios, which balances the performance and training efficiency of the proposed CL approach. Finally, datasets collected from different locations are used to design continual training and testing methods in experiments. Experimental results show that the proposed approach achieves consistently high prediction accuracy in continuous scenarios without re-training, which mitigates catastrophic forgetting compared to non-CL approaches. The implementation of the proposed approach is publicly available at https://github.com/BIT-Jack/D-GSM
|
2107.10552
|
Konstantinos Makantasis
|
Konstantinos Makantasis, David Melhart, Antonios Liapis, Georgios N.
Yannakakis
|
Privileged Information for Modeling Affect In The Wild
|
8 pages, 4 figures, 2021 9th International Conference on Affective
Computing and Intelligent Interaction (ACII)
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key challenge of affective computing research is discovering ways to
reliably transfer affect models that are built in the laboratory to real world
settings, namely in the wild. The existing gap between in vitro and in vivo
affect applications is mainly caused by limitations related to affect sensing
including intrusiveness, hardware malfunctions, availability of sensors, but
also privacy and security. In response to these limitations, in this paper we
draw on recent advances in machine learning and introduce the concept
of privileged information for operating affect models in the wild. The presence
of privileged information enables affect models to be trained across multiple
modalities available in a lab setting and ignore modalities that are not
available in the wild with no significant drop in their modeling performance.
The proposed privileged information framework is tested in a game arousal
corpus that contains physiological signals in the form of heart rate and
electrodermal activity, game telemetry, and pixels of footage from two
dissimilar games that are annotated with arousal traces. By training our
arousal models using all modalities (in vitro) and using solely pixels for
testing the models (in vivo), we reach levels of accuracy obtained from models
that fuse all modalities both for training and testing. The findings of this
paper make a decisive step towards realizing affect interaction in the wild.
|
[
{
"created": "Thu, 22 Jul 2021 10:09:16 GMT",
"version": "v1"
}
] |
2021-07-23
|
[
[
"Makantasis",
"Konstantinos",
""
],
[
"Melhart",
"David",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] |
A key challenge of affective computing research is discovering ways to reliably transfer affect models that are built in the laboratory to real world settings, namely in the wild. The existing gap between in vitro and in vivo affect applications is mainly caused by limitations related to affect sensing, including intrusiveness, hardware malfunctions, availability of sensors, but also privacy and security. As a response to these limitations, in this paper we are inspired by recent advances in machine learning and introduce the concept of privileged information for operating affect models in the wild. The presence of privileged information enables affect models to be trained across multiple modalities available in a lab setting and ignore modalities that are not available in the wild with no significant drop in their modeling performance. The proposed privileged information framework is tested in a game arousal corpus that contains physiological signals in the form of heart rate and electrodermal activity, game telemetry, and pixels of footage from two dissimilar games that are annotated with arousal traces. By training our arousal models using all modalities (in vitro) and using solely pixels for testing the models (in vivo), we reach levels of accuracy obtained from models that fuse all modalities both for training and testing. The findings of this paper make a decisive step towards realizing affect interaction in the wild.
|
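The privileged-information idea in the abstract above — train with all lab modalities, deploy with only the in-the-wild modality — can be sketched with a toy distillation-style pipeline (an illustration under synthetic data, not the paper's model; all names and the linear relation between modalities are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pixels = rng.normal(size=(n, 4))            # modality available in the wild
physio = pixels @ rng.normal(size=(4, 2))   # privileged lab-only modality
# (here physio is deterministically tied to pixels, so a pixels-only
# student can in principle recover everything the teacher sees)
labels = pixels[:, 0] + physio[:, 1] + 0.1 * rng.normal(size=n)

def fit_linear(X, y):
    """Least-squares linear model with a bias term; returns a predictor."""
    X1 = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ w

# In vitro: the teacher is fit on all modalities available in the lab.
teacher = fit_linear(np.hstack([pixels, physio]), labels)
soft_targets = teacher(np.hstack([pixels, physio]))
# In vivo: the student only sees pixels, fit to match the teacher's outputs.
student = fit_linear(pixels, soft_targets)
```

The student here closes the gap to the teacher because the privileged modality is informative at training time; the paper evaluates the same principle with learned affect models rather than linear fits.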
2206.04381
|
Zheng Chang
|
Zheng Chang, Xinfeng Zhang, Shanshe Wang, Siwei Ma, and Wen Gao
|
STIP: A SpatioTemporal Information-Preserving and Perception-Augmented
Model for High-Resolution Video Prediction
|
This journal paper is extended from our previous work accepted in
CVPR2022 and has been submitted to IEEE Transactions on Multimedia
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Although significant achievements have been made by recurrent neural
network (RNN) based video prediction methods, their performance on datasets
with high resolutions is still far from satisfactory because of the information
loss problem and the perception-insensitive mean square error (MSE) based loss
functions. In this paper, we propose a Spatiotemporal Information-Preserving
and Perception-Augmented Model (STIP) to solve the above two problems. To solve
the information loss problem, the proposed model aims to preserve the
spatiotemporal information for videos during the feature extraction and the
state transitions, respectively. Firstly, a Multi-Grained Spatiotemporal
Auto-Encoder (MGST-AE) is designed based on the X-Net structure. The proposed
MGST-AE can help the decoders recall multi-grained information from the
encoders in both the temporal and spatial domains. In this way, more
spatiotemporal information can be preserved during the feature extraction for
high-resolution videos. Secondly, a Spatiotemporal Gated Recurrent Unit (STGRU)
is designed based on the standard Gated Recurrent Unit (GRU) structure, which
can efficiently preserve spatiotemporal information during the state
transitions. The proposed STGRU can achieve more satisfactory performance with
a much lower computation load compared with the popular Long Short-Term Memory (LSTM)
based predictive memories. Furthermore, to improve the traditional MSE loss
functions, a Learned Perceptual Loss (LP-loss) is further designed based on the
Generative Adversarial Networks (GANs), which can help obtain a satisfactory
trade-off between the objective quality and the perceptual quality.
Experimental results show that the proposed STIP can predict videos with more
satisfactory visual quality compared with a variety of state-of-the-art
methods. Source code is available at
\url{https://github.com/ZhengChang467/STIPHR}.
|
[
{
"created": "Thu, 9 Jun 2022 09:49:04 GMT",
"version": "v1"
}
] |
2022-06-10
|
[
[
"Chang",
"Zheng",
""
],
[
"Zhang",
"Xinfeng",
""
],
[
"Wang",
"Shanshe",
""
],
[
"Ma",
"Siwei",
""
],
[
"Gao",
"Wen",
""
]
] |
Although significant achievements have been made by recurrent neural network (RNN) based video prediction methods, their performance on datasets with high resolutions is still far from satisfactory because of the information loss problem and the perception-insensitive mean square error (MSE) based loss functions. In this paper, we propose a Spatiotemporal Information-Preserving and Perception-Augmented Model (STIP) to solve the above two problems. To solve the information loss problem, the proposed model aims to preserve the spatiotemporal information for videos during the feature extraction and the state transitions, respectively. Firstly, a Multi-Grained Spatiotemporal Auto-Encoder (MGST-AE) is designed based on the X-Net structure. The proposed MGST-AE can help the decoders recall multi-grained information from the encoders in both the temporal and spatial domains. In this way, more spatiotemporal information can be preserved during the feature extraction for high-resolution videos. Secondly, a Spatiotemporal Gated Recurrent Unit (STGRU) is designed based on the standard Gated Recurrent Unit (GRU) structure, which can efficiently preserve spatiotemporal information during the state transitions. The proposed STGRU can achieve more satisfactory performance with a much lower computation load compared with the popular Long Short-Term Memory (LSTM) based predictive memories. Furthermore, to improve the traditional MSE loss functions, a Learned Perceptual Loss (LP-loss) is further designed based on the Generative Adversarial Networks (GANs), which can help obtain a satisfactory trade-off between the objective quality and the perceptual quality. Experimental results show that the proposed STIP can predict videos with more satisfactory visual quality compared with a variety of state-of-the-art methods. Source code is available at \url{https://github.com/ZhengChang467/STIPHR}.
|
2310.14450
|
Hans Hanley
|
Hans W. A. Hanley, Zakir Durumeric
|
TATA: Stance Detection via Topic-Agnostic and Topic-Aware Embeddings
|
Accepted to EMNLP 2023; Updated citations
| null | null | null |
cs.CL cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stance detection is important for understanding different attitudes and
beliefs on the Internet. However, given that a passage's stance toward a given
topic is often highly dependent on that topic, building a stance detection
model that generalizes to unseen topics is difficult. In this work, we propose
using contrastive learning as well as an unlabeled dataset of news articles
that cover a variety of different topics to train topic-agnostic/TAG and
topic-aware/TAW embeddings for use in downstream stance detection. Combining
these embeddings in our full TATA model, we achieve state-of-the-art
performance across several public stance detection datasets (0.771 $F_1$-score
on the Zero-shot VAST dataset). We release our code and data at
https://github.com/hanshanley/tata.
|
[
{
"created": "Sun, 22 Oct 2023 23:23:44 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Nov 2023 03:22:32 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Feb 2024 15:17:15 GMT",
"version": "v3"
}
] |
2024-02-09
|
[
[
"Hanley",
"Hans W. A.",
""
],
[
"Durumeric",
"Zakir",
""
]
] |
Stance detection is important for understanding different attitudes and beliefs on the Internet. However, given that a passage's stance toward a given topic is often highly dependent on that topic, building a stance detection model that generalizes to unseen topics is difficult. In this work, we propose using contrastive learning as well as an unlabeled dataset of news articles that cover a variety of different topics to train topic-agnostic/TAG and topic-aware/TAW embeddings for use in downstream stance detection. Combining these embeddings in our full TATA model, we achieve state-of-the-art performance across several public stance detection datasets (0.771 $F_1$-score on the Zero-shot VAST dataset). We release our code and data at https://github.com/hanshanley/tata.
|
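The abstract above trains its TAG/TAW embeddings with contrastive learning. A generic InfoNCE-style objective of the kind used for such embedding training can be sketched as follows (an illustrative form, not the authors' exact loss; the function name and temperature value are assumptions):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive loss: anchors[i] should score highest against
    positives[i] among all positives in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # matched pairs on diagonal
```

Minimizing this pulls matched (anchor, positive) embedding pairs together and pushes mismatched pairs apart, which is the property the downstream stance classifier relies on.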
1404.1518
|
Aske Plaat
|
Aske Plaat, Jonathan Schaeffer, Wim Pijls, Arie de Bruin
|
Nearly Optimal Minimax Tree Search?
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Knuth and Moore presented a theoretical lower bound on the number of leaves
that any fixed-depth minimax tree-search algorithm traversing a uniform tree
must explore, the so-called minimal tree. Since real-life minimax trees are not
uniform, the exact size of this tree is not known for most applications.
Further, most games have transpositions, implying that there exists a minimal
graph which is smaller than the minimal tree. For three games (chess, Othello
and checkers) we compute the size of the minimal tree and the minimal graph.
Empirical evidence shows that in all three games, enhanced Alpha-Beta search is
capable of building a tree that is close in size to that of the minimal graph.
Hence, it appears game-playing programs build nearly optimal search trees.
However, the conventional definition of the minimal graph is wrong. There are
ways in which the size of the minimal graph can be reduced: by maximizing the
number of transpositions in the search, and generating cutoffs using branches
that lead to smaller search trees. The conventional definition of the minimal
graph is just a left-most approximation. Calculating the size of the real
minimal graph is too computationally intensive. However, upper bound
approximations show it to be significantly smaller than the left-most minimal
graph. Hence, it appears that game-playing programs are not searching as
efficiently as is widely believed. Understanding the left-most and real minimal
search graphs leads to some new ideas for enhancing Alpha-Beta search. One of
them, enhanced transposition cutoffs, is shown to significantly reduce search
tree size.
|
[
{
"created": "Sat, 5 Apr 2014 20:13:58 GMT",
"version": "v1"
}
] |
2014-04-08
|
[
[
"Plaat",
"Aske",
""
],
[
"Schaeffer",
"Jonathan",
""
],
[
"Pijls",
"Wim",
""
],
[
"de Bruin",
"Arie",
""
]
] |
Knuth and Moore presented a theoretical lower bound on the number of leaves that any fixed-depth minimax tree-search algorithm traversing a uniform tree must explore, the so-called minimal tree. Since real-life minimax trees are not uniform, the exact size of this tree is not known for most applications. Further, most games have transpositions, implying that there exists a minimal graph which is smaller than the minimal tree. For three games (chess, Othello and checkers) we compute the size of the minimal tree and the minimal graph. Empirical evidence shows that in all three games, enhanced Alpha-Beta search is capable of building a tree that is close in size to that of the minimal graph. Hence, it appears game-playing programs build nearly optimal search trees. However, the conventional definition of the minimal graph is wrong. There are ways in which the size of the minimal graph can be reduced: by maximizing the number of transpositions in the search, and generating cutoffs using branches that lead to smaller search trees. The conventional definition of the minimal graph is just a left-most approximation. Calculating the size of the real minimal graph is too computationally intensive. However, upper bound approximations show it to be significantly smaller than the left-most minimal graph. Hence, it appears that game-playing programs are not searching as efficiently as is widely believed. Understanding the left-most and real minimal search graphs leads to some new ideas for enhancing Alpha-Beta search. One of them, enhanced transposition cutoffs, is shown to significantly reduce search tree size.
|
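The minimal-graph argument in the abstract above hinges on transpositions: the same position reached via different move orders. A toy fixed-depth minimax over an explicit game DAG (an illustration, not the paper's experiments; the node names are made up) shows how a transposition table shrinks the number of positions expanded:

```python
def minimax(node, maximizing, children, leaf_value, table=None, counter=None):
    key = (node, maximizing)
    if table is not None and key in table:
        return table[key]                    # transposition: already searched
    if counter is not None:
        counter[0] += 1                      # count positions actually expanded
    if node not in children:                 # leaf position
        value = leaf_value[node]
    else:
        values = [minimax(c, not maximizing, children, leaf_value, table, counter)
                  for c in children[node]]
        value = max(values) if maximizing else min(values)
    if table is not None:
        table[key] = value
    return value

# Two root moves transpose into the same position "X".
children = {"root": ["a", "b"], "a": ["X"], "b": ["X"], "X": ["l1", "l2"]}
leaf_value = {"l1": 3, "l2": 5}
```

With the table, "X" and its subtree are expanded once (6 expansions instead of 9 on this DAG), which is the tree-versus-graph gap the paper quantifies; a real engine like enhanced Alpha-Beta would also prune and would store bound information, not just exact values.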
2310.12755
|
Yuanduo Hong
|
Yuanduo Hong, Jue Wang, Weichao Sun, and Huihui Pan
|
Minimalist and High-Performance Semantic Segmentation with Plain Vision
Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the wake of Masked Image Modeling (MIM), a diverse range of plain,
non-hierarchical Vision Transformer (ViT) models have been pre-trained with
extensive datasets, offering new paradigms and significant potential for
semantic segmentation. Current state-of-the-art systems incorporate numerous
inductive biases and employ cumbersome decoders. Building upon the original
motivations of plain ViTs, which are simplicity and generality, we explore
high-performance `minimalist' systems to this end. Our primary purpose is to
provide simple and efficient baselines for practical semantic segmentation with
plain ViTs. Specifically, we first explore the feasibility and methodology for
achieving high-performance semantic segmentation using the last feature map. As
a result, we introduce the PlainSeg, a model comprising only three 3$\times$3
convolutions in addition to the transformer layers (either encoder or decoder).
In this process, we offer insights into two underlying principles: (i)
high-resolution features are crucial to high performance in spite of employing
simple up-sampling techniques and (ii) the slim transformer decoder requires a
much larger learning rate than the wide transformer decoder. On this basis, we
further present the PlainSeg-Hier, which allows for the utilization of
hierarchical features. Extensive experiments on four popular benchmarks
demonstrate the high performance and efficiency of our methods. They can also
serve as powerful tools for assessing the transfer ability of base models in
semantic segmentation. Code is available at
\url{https://github.com/ydhongHIT/PlainSeg}.
|
[
{
"created": "Thu, 19 Oct 2023 14:01:40 GMT",
"version": "v1"
}
] |
2023-10-20
|
[
[
"Hong",
"Yuanduo",
""
],
[
"Wang",
"Jue",
""
],
[
"Sun",
"Weichao",
""
],
[
"Pan",
"Huihui",
""
]
] |
In the wake of Masked Image Modeling (MIM), a diverse range of plain, non-hierarchical Vision Transformer (ViT) models have been pre-trained with extensive datasets, offering new paradigms and significant potential for semantic segmentation. Current state-of-the-art systems incorporate numerous inductive biases and employ cumbersome decoders. Building upon the original motivations of plain ViTs, which are simplicity and generality, we explore high-performance `minimalist' systems to this end. Our primary purpose is to provide simple and efficient baselines for practical semantic segmentation with plain ViTs. Specifically, we first explore the feasibility and methodology for achieving high-performance semantic segmentation using the last feature map. As a result, we introduce the PlainSeg, a model comprising only three 3$\times$3 convolutions in addition to the transformer layers (either encoder or decoder). In this process, we offer insights into two underlying principles: (i) high-resolution features are crucial to high performance in spite of employing simple up-sampling techniques and (ii) the slim transformer decoder requires a much larger learning rate than the wide transformer decoder. On this basis, we further present the PlainSeg-Hier, which allows for the utilization of hierarchical features. Extensive experiments on four popular benchmarks demonstrate the high performance and efficiency of our methods. They can also serve as powerful tools for assessing the transfer ability of base models in semantic segmentation. Code is available at \url{https://github.com/ydhongHIT/PlainSeg}.
|
2106.11483
|
Tong Guo
|
Tong Guo
|
A Comprehensive Comparison of Pre-training Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the development of pre-trained language models has brought natural
language processing (NLP) tasks to a new state-of-the-art. In this paper we
explore the efficiency of various pre-trained language models. We pre-train a
list of transformer-based models with the same amount of text and the same
number of training steps. The experimental results show that the largest
improvement over the original BERT comes from adding an RNN layer to capture
more contextual information for short-text understanding. The conclusion,
however, is that there is no remarkable improvement in short-text understanding
across similar BERT structures; a data-centric method [12] can achieve better
performance.
|
[
{
"created": "Tue, 22 Jun 2021 02:12:29 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Jul 2021 01:45:28 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Oct 2021 06:33:06 GMT",
"version": "v3"
},
{
"created": "Fri, 12 Aug 2022 12:39:05 GMT",
"version": "v4"
},
{
"created": "Thu, 20 Oct 2022 03:38:29 GMT",
"version": "v5"
},
{
"created": "Wed, 26 Oct 2022 01:03:14 GMT",
"version": "v6"
},
{
"created": "Wed, 7 Dec 2022 07:54:57 GMT",
"version": "v7"
},
{
"created": "Tue, 7 Feb 2023 07:52:16 GMT",
"version": "v8"
},
{
"created": "Wed, 26 Jul 2023 01:56:20 GMT",
"version": "v9"
}
] |
2023-07-27
|
[
[
"Guo",
"Tong",
""
]
] |
Recently, the development of pre-trained language models has brought natural language processing (NLP) tasks to a new state-of-the-art. In this paper we explore the efficiency of various pre-trained language models. We pre-train a list of transformer-based models with the same amount of text and the same number of training steps. The experimental results show that the largest improvement over the original BERT comes from adding an RNN layer to capture more contextual information for short-text understanding. The conclusion, however, is that there is no remarkable improvement in short-text understanding across similar BERT structures; a data-centric method [12] can achieve better performance.
|
2305.09933
|
Steven Macenski
|
Steve Macenski, Alberto Soragna, Michael Carroll, Zhenpeng Ge
|
Impact of ROS 2 Node Composition in Robotic Systems
|
IEEE Robotics and Automation Letters, 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The Robot Operating System 2 (ROS 2) is the second generation of ROS
representing a step forward in the robotic framework. Several new types of
nodes and executor models are integral to control where, how, and when
information is processed in the computational graph. This paper explores and
benchmarks one of these new node types -- the Component node -- which allows
nodes to be composed manually or dynamically into processes while retaining
separation of concerns in a codebase for distributed development. Composition
is shown to achieve a high degree of performance optimization, particularly
valuable for resource-constrained systems and sensor processing pipelines,
enabling distributed tasks that would not be otherwise possible in ROS 2. In
this work, we briefly introduce the significance and design of node
composition, and then present our benchmarking contribution to analyze its
impact on robotic systems. Its compelling influence on performance is shown
through several experiments on the latest Long Term Support (LTS) ROS 2
distribution, Humble Hawksbill.
|
[
{
"created": "Wed, 17 May 2023 03:39:32 GMT",
"version": "v1"
}
] |
2023-05-18
|
[
[
"Macenski",
"Steve",
""
],
[
"Soragna",
"Alberto",
""
],
[
"Carroll",
"Michael",
""
],
[
"Ge",
"Zhenpeng",
""
]
] |
The Robot Operating System 2 (ROS 2) is the second generation of ROS representing a step forward in the robotic framework. Several new types of nodes and executor models are integral to control where, how, and when information is processed in the computational graph. This paper explores and benchmarks one of these new node types -- the Component node -- which allows nodes to be composed manually or dynamically into processes while retaining separation of concerns in a codebase for distributed development. Composition is shown to achieve a high degree of performance optimization, particularly valuable for resource-constrained systems and sensor processing pipelines, enabling distributed tasks that would not be otherwise possible in ROS 2. In this work, we briefly introduce the significance and design of node composition, and then present our benchmarking contribution to analyze its impact on robotic systems. Its compelling influence on performance is shown through several experiments on the latest Long Term Support (LTS) ROS 2 distribution, Humble Hawksbill.
|
2401.08179
|
Christodoulos Peltekis
|
Christodoulos Peltekis, Vasileios Titopoulos, Chrysostomos Nicopoulos,
Giorgos Dimitrakopoulos
|
DeMM: A Decoupled Matrix Multiplication Engine Supporting Relaxed
Structured Sparsity
|
Accepted on the IEEE Computer Architecture Letters
| null |
10.1109/LCA.2024.3355178
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Learning (DL) has achieved unprecedented success in various application
domains. Meanwhile, model pruning has emerged as a viable solution to reduce
the footprint of DL models in mobile applications, without compromising their
accuracy. To enable the matrix engines built for dense DL models to also handle
their pruned counterparts, pruned DL models follow a fine-grained structured
sparsity pattern of 1:4, or 2:4, whereby in each group of four contiguous
values, at least one, or two, respectively, must be non-zero. Structured
sparsity has recently also moved to coarser (relaxed) cases of N:128, or N:256,
for small values of N, targeting a wider range of sparsity (10%-90%) for the DL
models. In this work, we design an accelerator that operates, by construction,
on wide blocks with relaxed structured sparsity. In contrast to the
conventional systolic array archetype, the new engine decouples the memory part
of the systolic array from the multiply-add units. The memory block comprises 1
write and N read ports, with the number of read ports being equal to the number
of non-zero elements per row. The multiply-add units connect directly to each
read port and complete the multiplication in a row-wise product-first order.
More importantly, simple reconfiguration facilitates more dense patterns. The
experimental evaluation demonstrates substantial latency improvements over
current state-of-the-art systolic array engines built for fine-grained and
relaxed structured sparsity.
|
[
{
"created": "Tue, 16 Jan 2024 07:51:15 GMT",
"version": "v1"
}
] |
2024-01-17
|
[
[
"Peltekis",
"Christodoulos",
""
],
[
"Titopoulos",
"Vasileios",
""
],
[
"Nicopoulos",
"Chrysostomos",
""
],
[
"Dimitrakopoulos",
"Giorgos",
""
]
] |
Deep Learning (DL) has achieved unprecedented success in various application domains. Meanwhile, model pruning has emerged as a viable solution to reduce the footprint of DL models in mobile applications, without compromising their accuracy. To enable the matrix engines built for dense DL models to also handle their pruned counterparts, pruned DL models follow a fine-grained structured sparsity pattern of 1:4, or 2:4, whereby in each group of four contiguous values, at least one, or two, respectively, must be non-zero. Structured sparsity has recently also moved to coarser (relaxed) cases of N:128, or N:256, for small values of N, targeting a wider range of sparsity (10%-90%) for the DL models. In this work, we design an accelerator that operates, by construction, on wide blocks with relaxed structured sparsity. In contrast to the conventional systolic array archetype, the new engine decouples the memory part of the systolic array from the multiply-add units. The memory block comprises 1 write and N read ports, with the number of read ports being equal to the number of non-zero elements per row. The multiply-add units connect directly to each read port and complete the multiplication in a row-wise product-first order. More importantly, simple reconfiguration facilitates more dense patterns. The experimental evaluation demonstrates substantial latency improvements over current state-of-the-art systolic array engines built for fine-grained and relaxed structured sparsity.
|
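The relaxed N:M structured-sparsity pattern defined in the abstract above (at most N non-zeros in every group of M contiguous values) is easy to verify in software. A small NumPy helper (an illustrative check, not part of the paper's hardware design; the function name is made up) could look like:

```python
import numpy as np

def satisfies_relaxed_sparsity(matrix, n, m):
    """True if every length-m block along each row has at most n non-zeros."""
    rows, cols = matrix.shape
    assert cols % m == 0, "row length must be a multiple of the block size"
    blocks = matrix.reshape(rows, cols // m, m)       # split rows into blocks
    nonzeros_per_block = np.count_nonzero(blocks, axis=2)
    return bool(np.all(nonzeros_per_block <= n))

# A row of 128 values with 8 non-zeros satisfies an 8:128 pattern
# but violates the tighter 4:128 pattern.
w = np.zeros((1, 128))
w[0, :8] = 1.0
print(satisfies_relaxed_sparsity(w, 8, 128))   # True
print(satisfies_relaxed_sparsity(w, 4, 128))   # False
```

The same per-block non-zero bound is what lets the proposed engine size its memory read ports: N read ports suffice because no block ever supplies more than N operands.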