| id (string, 9–10) | submitter (string, 1–64, ⌀) | authors (string, 4–20.7k) | title (string, 4–246) | comments (string, 1–523, ⌀) | journal-ref (string, 4–404, ⌀) | doi (string, 11–153, ⌀) | report-no (string, 2–254, ⌀) | categories (string, 5–98) | license (9 classes) | orig_abstract (string, 14–3.35k) | versions (list, 1–60) | update_date (string, 10) | authors_parsed (list, 1–1.35k) | abstract (string, 11–3.34k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2302.02345
|
Botong Zhu
|
Botong Zhu and Huobin Tan
|
VuLASTE: Long Sequence Model with Abstract Syntax Tree Embedding for
  Vulnerability Detection
| null | null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
  In this paper, we build a model named VuLASTE, which regards vulnerability
detection as a special text classification task. To solve the vocabulary
explosion problem, VuLASTE uses a byte-level BPE algorithm from natural
language processing. In VuLASTE, a new AST path embedding is added to represent
source code nesting information. We also use a combination of global and
dilated window attention from Longformer to extract long-sequence semantics
from source code. To address data imbalance, a common problem in vulnerability
detection datasets, focal loss is used as the loss function so that the model
focuses on poorly classified cases during training. To test model performance
on real-world source code, we build a cross-language, multi-repository
vulnerability dataset from the GitHub Security Advisory Database. On this
dataset, VuLASTE achieves top-50, top-100, top-200, and top-500 hits of 29,
51, 86, and 228, higher than state-of-the-art approaches.
|
[
{
"created": "Sun, 5 Feb 2023 09:17:02 GMT",
"version": "v1"
}
] |
2023-02-07
|
[
[
"Zhu",
"Botong",
""
],
[
"Tan",
"Huobin",
""
]
] |
In this paper, we build a model named VuLASTE, which regards vulnerability detection as a special text classification task. To solve the vocabulary explosion problem, VuLASTE uses a byte-level BPE algorithm from natural language processing. In VuLASTE, a new AST path embedding is added to represent source code nesting information. We also use a combination of global and dilated window attention from Longformer to extract long-sequence semantics from source code. To address data imbalance, a common problem in vulnerability detection datasets, focal loss is used as the loss function so that the model focuses on poorly classified cases during training. To test model performance on real-world source code, we build a cross-language, multi-repository vulnerability dataset from the GitHub Security Advisory Database. On this dataset, VuLASTE achieves top-50, top-100, top-200, and top-500 hits of 29, 51, 86, and 228, higher than state-of-the-art approaches.
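The focal loss mentioned in the abstract down-weights well-classified examples so training concentrates on hard cases. A minimal sketch of the standard binary focal loss, with the commonly used hyperparameters gamma=2 and alpha=0.25 (an assumption; the abstract does not state VuLASTE's exact settings):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single example.

    p     : predicted probability of the positive class
    y     : true label (0 or 1)
    gamma : focusing parameter; larger values down-weight easy examples more
    alpha : class-balance weight for the positive class
    """
    if y == 1:
        return -alpha * (1.0 - p) ** gamma * math.log(p)
    return -(1.0 - alpha) * p ** gamma * math.log(1.0 - p)

# A confidently correct prediction contributes far less loss than a poor one,
# which is how the model is pushed to focus on poorly classified cases.
easy = focal_loss(0.95, 1)  # well-classified positive
hard = focal_loss(0.10, 1)  # poorly classified positive
```

The `(1 - p) ** gamma` modulating factor is what distinguishes focal loss from plain cross-entropy: at `p = 0.95` it shrinks the loss by a factor of 400 for `gamma = 2`.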
|
2009.07717
|
Sara Ahmed
|
Sara Atito Ali Ahmed, Berrin Yanikoglu
|
Relative Attribute Classification with Deep Rank SVM
| null | null |
10.1007/978-3-030-68790-8_51
| null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Relative attributes indicate the strength of a particular attribute between
image pairs. We introduce a deep Siamese network with a rank SVM loss function,
called Deep Rank SVM (DRSVM), to decide which of a pair of images has a
stronger presence of a specific attribute. The network is trained end-to-end
to jointly learn the visual features and the ranking function. We demonstrate
the effectiveness of our approach against state-of-the-art methods on four
image benchmark datasets: LFW-10, PubFig, UTZap50K-lexi and UTZap50K-2. DRSVM
surpasses the state of the art in average accuracy across attributes on three
of the four benchmark datasets.
|
[
{
"created": "Wed, 9 Sep 2020 09:21:39 GMT",
"version": "v1"
}
] |
2021-11-16
|
[
[
"Ahmed",
"Sara Atito Ali",
""
],
[
"Yanikoglu",
"Berrin",
""
]
] |
Relative attributes indicate the strength of a particular attribute between image pairs. We introduce a deep Siamese network with a rank SVM loss function, called Deep Rank SVM (DRSVM), to decide which of a pair of images has a stronger presence of a specific attribute. The network is trained end-to-end to jointly learn the visual features and the ranking function. We demonstrate the effectiveness of our approach against state-of-the-art methods on four image benchmark datasets: LFW-10, PubFig, UTZap50K-lexi and UTZap50K-2. DRSVM surpasses the state of the art in average accuracy across attributes on three of the four benchmark datasets.
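The rank SVM loss at the core of DRSVM is a pairwise hinge: the image with the stronger attribute should outscore the other by a margin. A generic sketch (the function name and unit margin are assumptions, not the paper's exact formulation):

```python
def rank_svm_loss(score_i, score_j, margin=1.0):
    """Pairwise hinge (rank SVM) loss: image i is labeled as having a
    stronger attribute than image j, so its score should exceed score_j
    by at least `margin`. In a Siamese setup, both scores come from the
    same network applied to the two images of a pair."""
    return max(0.0, margin - (score_i - score_j))

# Zero loss once the ordering is satisfied with margin; linear penalty otherwise.
ok = rank_svm_loss(2.0, 0.5)        # correctly ordered with margin -> 0.0
violated = rank_svm_loss(0.5, 0.5)  # tie violates the margin -> 1.0
```

Because the loss depends only on the score difference, gradients flow through both branches of the Siamese network, jointly shaping the features and the ranking function.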
|
1609.09541
|
H\'ector P\'erez L\'opez-Portillo
|
P\'erez L\'opez-Portillo, H\'ector, V\'azquez Gonz\'alez, Edgar
Ren\'e, Romero Hidalgo, Jorge Alberto
|
Knowledge management metrics for Public Organizations: A literature
review-based proposal
|
conference proceedings
| null |
10.13140/RG.2.2.24281.11368/1
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Knowledge Management (KM) is a relatively new phenomenon in the field of
Public Sector Organizations (PSO), bringing new paradigms of organizational
management along with challenges, risks and opportunities for its
implementation, development and evaluation. KM can be seen as a systematic and
deliberate effort to coordinate people, technology, organizational structures
and their environment through knowledge reuse and innovation. This management
approach has developed in parallel with the development and use of information
and communications technologies (ICT). Nowadays more PSO embed KM practices in
their core processes to support them, and as an advanced management strategy
to create a new culture based on technology and resource efficiency. In this
paper, we observe that KM can support organizational goals in PSO. The aim of
this paper is to understand KM factors and their associated components, and to
propose KM metrics for measuring KM programs in PSO. Through a critical
literature review we analysed diverse studies related to KM performance
indicators in PSO, and based on previous works we summarized those most
suitable for this purpose. We found that, in the academic literature, studies
on KM measurement in PSO are uncommon and emerging. In the last section of
this paper, we present a proposal of KM metrics for PSO, together with
recommendations and practical implications for KM metrics development in PSO.
This academic endeavour seeks to contribute to the theoretical debate about KM
measure development for KM initiatives in PSO.
|
[
{
"created": "Thu, 29 Sep 2016 22:36:04 GMT",
"version": "v1"
}
] |
2016-10-03
|
[
  [
    "Pérez López-Portillo",
    "Héctor",
    ""
  ],
  [
    "Vázquez González",
    "Edgar René",
    ""
  ],
  [
    "Romero Hidalgo",
    "Jorge Alberto",
    ""
  ]
] |
Knowledge Management (KM) is a relatively new phenomenon in the field of Public Sector Organizations (PSO), bringing new paradigms of organizational management along with challenges, risks and opportunities for its implementation, development and evaluation. KM can be seen as a systematic and deliberate effort to coordinate people, technology, organizational structures and their environment through knowledge reuse and innovation. This management approach has developed in parallel with the development and use of information and communications technologies (ICT). Nowadays more PSO embed KM practices in their core processes to support them, and as an advanced management strategy to create a new culture based on technology and resource efficiency. In this paper, we observe that KM can support organizational goals in PSO. The aim of this paper is to understand KM factors and their associated components, and to propose KM metrics for measuring KM programs in PSO. Through a critical literature review we analysed diverse studies related to KM performance indicators in PSO, and based on previous works we summarized those most suitable for this purpose. We found that, in the academic literature, studies on KM measurement in PSO are uncommon and emerging. In the last section of this paper, we present a proposal of KM metrics for PSO, together with recommendations and practical implications for KM metrics development in PSO. This academic endeavour seeks to contribute to the theoretical debate about KM measure development for KM initiatives in PSO.
|
2404.15971
|
Xin Zhang
|
Xin Zhang, Wenwen Liu
|
Boosting Architectural Generation via Prompts: Report
|
  Brief report of architectural prompts
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In the realm of AI architectural design, the importance of prompts is
becoming increasingly prominent. With advancements in artificial intelligence
and large-scale model technology, more design tasks are being delegated to
machine learning algorithms. This necessitates a method for designers to guide
algorithms in producing their desired designs. Prompts serve as a guiding and
motivational mechanism, playing a crucial role in AI-generated architectural
design. This paper categorizes and summarizes common vocabulary used in
architectural design, discussing how to craft effective prompts and their
impact on the quality and creativity of generated results. Through careful
prompt design, designers can better control the generated architectural design
images, thereby achieving designs that are better aligned with requirements
and more innovative.
|
[
{
"created": "Wed, 24 Apr 2024 16:44:25 GMT",
"version": "v1"
}
] |
2024-04-25
|
[
[
"Zhang",
"Xin",
""
],
[
"Liu",
"Wenwen",
""
]
] |
In the realm of AI architectural design, the importance of prompts is becoming increasingly prominent. With advancements in artificial intelligence and large-scale model technology, more design tasks are being delegated to machine learning algorithms. This necessitates a method for designers to guide algorithms in producing their desired designs. Prompts serve as a guiding and motivational mechanism, playing a crucial role in AI-generated architectural design. This paper categorizes and summarizes common vocabulary used in architectural design, discussing how to craft effective prompts and their impact on the quality and creativity of generated results. Through careful prompt design, designers can better control the generated architectural design images, thereby achieving designs that are better aligned with requirements and more innovative.
|
1806.10174
|
Emilia Apostolova PhD
|
Tony Wang, Tom Velez, Emilia Apostolova, Tim Tschampel, Thuy L. Ngo,
Joy Hardison
|
Semantically Enhanced Dynamic Bayesian Network for Detecting Sepsis
Mortality Risk in ICU Patients with Infection
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Although timely sepsis diagnosis and prompt interventions in Intensive Care
Unit (ICU) patients are associated with reduced mortality, early clinical
recognition is frequently impeded by non-specific signs of infection and
failure to detect signs of sepsis-induced organ dysfunction in a constellation
of dynamically changing physiological data. The goal of this work is to
identify patients at risk of life-threatening sepsis using a data-centered
and machine learning-driven approach. We derive a mortality risk predictive
dynamic Bayesian network (DBN) guided by a customized sepsis knowledgebase and
compare the predictive accuracy of the derived DBN with the Sepsis-related
Organ Failure Assessment (SOFA) score, the Quick SOFA (qSOFA) score, the
Simplified Acute Physiological Score (SAPS-II) and the Modified Early Warning
Score (MEWS) tools.
  A customized sepsis ontology was used to derive the DBN node structure and
semantically characterize temporal features derived from both structured
physiological data and unstructured clinical notes. We assessed the
performance of the DBN predictive model in predicting mortality risk and
compared it to other models using Receiver Operating Characteristic (ROC)
curves, area under the curve (AUROC), calibration curves, and risk
distributions. The derived dataset consists of 24,506 ICU stays from 19,623
patients with evidence of suspected infection, 2,829 of whom were deceased at
discharge. The DBN AUROC was 0.91, outperforming the SOFA (0.843), qSOFA
(0.66), MEWS (0.73), and SAPS-II (0.77) scoring tools. Continuous Net
Reclassification Index and Integrated Discrimination Improvement analyses
supported the superiority of the DBN. Compared with conventional rule-based
risk scoring tools, the sepsis knowledgebase-driven DBN algorithm offers
improved performance for predicting mortality of infected patients in ICUs.
|
[
{
"created": "Tue, 26 Jun 2018 19:09:19 GMT",
"version": "v1"
}
] |
2018-06-28
|
[
[
"Wang",
"Tony",
""
],
[
"Velez",
"Tom",
""
],
[
"Apostolova",
"Emilia",
""
],
[
"Tschampel",
"Tim",
""
],
[
"Ngo",
"Thuy L.",
""
],
[
"Hardison",
"Joy",
""
]
] |
Although timely sepsis diagnosis and prompt interventions in Intensive Care Unit (ICU) patients are associated with reduced mortality, early clinical recognition is frequently impeded by non-specific signs of infection and failure to detect signs of sepsis-induced organ dysfunction in a constellation of dynamically changing physiological data. The goal of this work is to identify patients at risk of life-threatening sepsis using a data-centered and machine learning-driven approach. We derive a mortality risk predictive dynamic Bayesian network (DBN) guided by a customized sepsis knowledgebase and compare the predictive accuracy of the derived DBN with the Sepsis-related Organ Failure Assessment (SOFA) score, the Quick SOFA (qSOFA) score, the Simplified Acute Physiological Score (SAPS-II) and the Modified Early Warning Score (MEWS) tools. A customized sepsis ontology was used to derive the DBN node structure and semantically characterize temporal features derived from both structured physiological data and unstructured clinical notes. We assessed the performance of the DBN predictive model in predicting mortality risk and compared it to other models using Receiver Operating Characteristic (ROC) curves, area under the curve (AUROC), calibration curves, and risk distributions. The derived dataset consists of 24,506 ICU stays from 19,623 patients with evidence of suspected infection, 2,829 of whom were deceased at discharge. The DBN AUROC was 0.91, outperforming the SOFA (0.843), qSOFA (0.66), MEWS (0.73), and SAPS-II (0.77) scoring tools. Continuous Net Reclassification Index and Integrated Discrimination Improvement analyses supported the superiority of the DBN. Compared with conventional rule-based risk scoring tools, the sepsis knowledgebase-driven DBN algorithm offers improved performance for predicting mortality of infected patients in ICUs.
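The AUROC values compared above have a simple probabilistic reading: AUROC is the probability that a randomly chosen deceased patient receives a higher risk score than a randomly chosen survivor. A minimal rank-based computation of that quantity (illustrative only; this is not the paper's evaluation code):

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs in which the positive case scores
    higher; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation -> 1.0; chance-level scoring -> 0.5.
perfect = auroc([0.9, 0.8], [0.1, 0.2])
chance = auroc([0.5], [0.5])
```

An AUROC of 0.91 for the DBN therefore means that in 91% of deceased/survivor pairs, the model ranks the deceased patient as higher risk, versus 66% for qSOFA.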
|
2301.03128
|
Aria Nosratinia
|
Heping Wan, Anders Host-Madsen, Aria Nosratinia
|
Compress-and-Forward via Multilevel Coding and Trellis Coded
Quantization
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compress-forward (CF) relays can improve communication rates even when the
relay cannot decode the source signal. Efficient implementation of CF is a
topic of contemporary interest, in part because of its potential impact on
wireless technologies such as cloud-RAN. There exists a gap between the
performance of CF implementations in the high spectral efficiency regime and
the corresponding information-theoretic achievable rates. We begin by
re-framing a dilemma causing this gap, and propose an approach for its
mitigation. We utilize trellis coded quantization (TCQ) at the relay together
with multi-level coding at the source and relay, in a manner that facilitates
the calculation of bit LLRs at the destination for joint decoding. The
contributions of this work include designing TCQ for end-to-end relay
performance, since a distortion-minimizing TCQ is suboptimal. The reported
improvements include a 1 dB gain over prior results for PSK modulation.
|
[
{
"created": "Mon, 9 Jan 2023 00:33:56 GMT",
"version": "v1"
}
] |
2023-01-10
|
[
[
"Wan",
"Heping",
""
],
[
"Host-Madsen",
"Anders",
""
],
[
"Nosratinia",
"Aria",
""
]
] |
Compress-forward (CF) relays can improve communication rates even when the relay cannot decode the source signal. Efficient implementation of CF is a topic of contemporary interest, in part because of its potential impact on wireless technologies such as cloud-RAN. There exists a gap between the performance of CF implementations in the high spectral efficiency regime and the corresponding information-theoretic achievable rates. We begin by re-framing a dilemma causing this gap, and propose an approach for its mitigation. We utilize trellis coded quantization (TCQ) at the relay together with multi-level coding at the source and relay, in a manner that facilitates the calculation of bit LLRs at the destination for joint decoding. The contributions of this work include designing TCQ for end-to-end relay performance, since a distortion-minimizing TCQ is suboptimal. The reported improvements include a 1 dB gain over prior results for PSK modulation.
|
2404.19154
|
Ning An
|
Ning An, Lei Hei, Yong Jiang, Weiping Meng, Jingjing Hu, Boran Huang,
Feiliang Ren
|
RTF: Region-based Table Filling Method for Relational Triple Extraction
|
Rejected by EMNLP 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Relational triple extraction is crucial for the automatic construction of
knowledge graphs. Existing methods construct only shallow representations at
the token or token-pair level. Moreover, previous works ignore the local
spatial dependencies of relational triples, resulting in weak entity-pair
boundary detection. To tackle this problem, we propose a novel Region-based
Table Filling method (RTF). We devise a novel region-based tagging scheme and
a bi-directional decoding strategy, which regard each relational triple as a
region on the relation-specific table and identify triples by determining the
two endpoints of each region. We also introduce convolution to construct
region-level table representations from a spatial perspective, which makes
triples easier to capture. In addition, we share partial tagging scores among
different relations to improve the learning efficiency of the relation
classifier. Experimental results show that our method achieves
state-of-the-art performance with better generalization capability on three
variants of two widely used benchmark datasets.
|
[
{
"created": "Mon, 29 Apr 2024 23:36:38 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jun 2024 16:26:15 GMT",
"version": "v2"
}
] |
2024-06-14
|
[
[
"An",
"Ning",
""
],
[
"Hei",
"Lei",
""
],
[
"Jiang",
"Yong",
""
],
[
"Meng",
"Weiping",
""
],
[
"Hu",
"Jingjing",
""
],
[
"Huang",
"Boran",
""
],
[
"Ren",
"Feiliang",
""
]
] |
Relational triple extraction is crucial for the automatic construction of knowledge graphs. Existing methods construct only shallow representations at the token or token-pair level. Moreover, previous works ignore the local spatial dependencies of relational triples, resulting in weak entity-pair boundary detection. To tackle this problem, we propose a novel Region-based Table Filling method (RTF). We devise a novel region-based tagging scheme and a bi-directional decoding strategy, which regard each relational triple as a region on the relation-specific table and identify triples by determining the two endpoints of each region. We also introduce convolution to construct region-level table representations from a spatial perspective, which makes triples easier to capture. In addition, we share partial tagging scores among different relations to improve the learning efficiency of the relation classifier. Experimental results show that our method achieves state-of-the-art performance with better generalization capability on three variants of two widely used benchmark datasets.
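The core geometric idea above, a triple as a rectangular region on a relation-specific table identified by two endpoints, can be illustrated with a toy decoder (hypothetical and simplified; the paper's tagging scheme and bi-directional decoding are more involved):

```python
def decode_region(top_left, bottom_right):
    """Recover entity spans from the two tagged endpoints of a region on
    a relation-specific table, where row indices correspond to subject
    tokens and column indices to object tokens.

    top_left     : (row, col) of the region's upper-left corner
    bottom_right : (row, col) of the region's lower-right corner
    Returns (subject_span, object_span) as inclusive token-index ranges.
    """
    (r1, c1), (r2, c2) = top_left, bottom_right
    subject_span = (r1, r2)  # rows covered by the region = subject tokens
    object_span = (c1, c2)   # columns covered by the region = object tokens
    return subject_span, object_span

# A region spanning rows 2-3 and columns 5-6 yields a two-token subject
# (tokens 2..3) and a two-token object (tokens 5..6) for that relation.
spans = decode_region((2, 5), (3, 6))
```

Determining two endpoints per region is what gives the method its entity-pair boundary awareness: the rectangle's extent encodes both entities' boundaries jointly rather than token by token.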
|
2305.08063
|
Jingbo Liu
|
Jingbo Liu
|
From Soft-Minoration to Information-Constrained Optimal Transport and
Spiked Tensor Models
|
ISIT 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $P_Z$ be a given distribution on $\mathbb{R}^n$. For any
$y\in\mathbb{R}^n$, we may interpret
$\rho(y):=\ln\mathbb{E}[e^{\left<y,Z\right>}]$ as a soft-max of
$\left<y,Z\right>$. We explore lower bounds on $\mathbb{E}[\rho(Y)]$ in terms
of the minimum mutual information $I(Z,\bar{Z})$ over $P_{Z\bar{Z}}$ which is a
coupling of $P_Z$ and itself such that $Z-\bar{Z}$ is bounded in a certain
sense. This may be viewed as a soft version of Sudakov's minoration, which
lower bounds the expected supremum of a stochastic process in terms of the
packing number. Our method is based on convex geometry (thrifty approximation
of convex bodies), and works for general non-Gaussian $Y$. When $Y$ is Gaussian
and $\bar{Z}$ converges to $Z$, this recovers a recent inequality of
Bai-Wu-Ozgur on information-constrained optimal transport, previously
established using Gaussian-specific techniques. We also use soft-minoration to
obtain asymptotically (in tensor order) tight bounds on the free energy in the
Sherrington-Kirkpatrick model with spins uniformly distributed on a type class,
implying asymptotically tight bounds for the type~II error exponent in spiked
tensor detection.
|
[
{
"created": "Sun, 14 May 2023 04:20:04 GMT",
"version": "v1"
}
] |
2023-05-16
|
[
[
"Liu",
"Jingbo",
""
]
] |
Let $P_Z$ be a given distribution on $\mathbb{R}^n$. For any $y\in\mathbb{R}^n$, we may interpret $\rho(y):=\ln\mathbb{E}[e^{\left<y,Z\right>}]$ as a soft-max of $\left<y,Z\right>$. We explore lower bounds on $\mathbb{E}[\rho(Y)]$ in terms of the minimum mutual information $I(Z,\bar{Z})$ over $P_{Z\bar{Z}}$ which is a coupling of $P_Z$ and itself such that $Z-\bar{Z}$ is bounded in a certain sense. This may be viewed as a soft version of Sudakov's minoration, which lower bounds the expected supremum of a stochastic process in terms of the packing number. Our method is based on convex geometry (thrifty approximation of convex bodies), and works for general non-Gaussian $Y$. When $Y$ is Gaussian and $\bar{Z}$ converges to $Z$, this recovers a recent inequality of Bai-Wu-Ozgur on information-constrained optimal transport, previously established using Gaussian-specific techniques. We also use soft-minoration to obtain asymptotically (in tensor order) tight bounds on the free energy in the Sherrington-Kirkpatrick model with spins uniformly distributed on a type class, implying asymptotically tight bounds for the type~II error exponent in spiked tensor detection.
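Why $\rho$ behaves as a soft-max is easiest to see in the finite uniform case: if $P_Z$ is uniform on $\{z_1,\dots,z_n\}$, then $\rho(y)=\ln\frac{1}{n}\sum_{i=1}^n e^{\langle y,z_i\rangle}$, and the standard log-sum-exp bounds give

```latex
\max_{1\le i\le n}\langle y, z_i\rangle - \ln n
\;\le\;
\rho(y) = \ln\frac{1}{n}\sum_{i=1}^{n} e^{\langle y, z_i\rangle}
\;\le\;
\max_{1\le i\le n}\langle y, z_i\rangle ,
```

so $\rho(y)$ tracks $\max_i\langle y,z_i\rangle$ up to an additive $\ln n$, mirroring how Sudakov's minoration controls an expected supremum via a packing (log-cardinality) term.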
|
1610.09530
|
Ying Cui
|
Chengjun Guo, Ying Cui, Derrick Wing Kwan Ng and Zhi Liu
|
Multi-Quality Multicast Beamforming based on Scalable Video Coding
|
30 pages, submitted to GLOBECOM 2017 and TCOM
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider multi-quality multicast beamforming of a video
stream from a multi-antenna base station (BS) to multiple single-antenna users
receiving different qualities of the same video stream, via scalable video
coding (SVC). Leveraging the layered structure of SVC and exploiting
superposition coding (SC) as well as successive interference cancelation (SIC),
we propose a layer-based multi-quality multicast beamforming scheme. To reduce
the complexity, we also propose a quality-based multi-quality multicast
beamforming scheme, which further utilizes the layered structure of SVC and
quality information of all users. Under each scheme, for given quality
requirements of all users, we formulate the corresponding optimal beamforming
design as a non-convex power minimization problem, and obtain a globally
optimal solution for a class of special cases as well as a locally optimal
solution for the general case. Then, we show that the minimum total
transmission power of the quality-based power minimization problem is the same
as that of the layer-based power minimization problem, although the former
incurs a lower computational complexity. Next, we consider the optimal joint
layer selection and quality-based multi-quality multicast beamforming design to
maximize the total utility representing the satisfaction with the received
video quality for all users under a given maximum transmission power budget,
which is NP-hard in general. By exploiting the optimal solution of the
quality-based power minimization problem, we develop a greedy algorithm to
obtain a near-optimal solution. Finally, numerical results show that the
proposed solutions achieve better performance than existing solutions.
|
[
{
"created": "Sat, 29 Oct 2016 15:27:08 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jul 2017 00:47:31 GMT",
"version": "v2"
}
] |
2017-07-18
|
[
[
"Guo",
"Chengjun",
""
],
[
"Cui",
"Ying",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Liu",
"Zhi",
""
]
] |
In this paper, we consider multi-quality multicast beamforming of a video stream from a multi-antenna base station (BS) to multiple single-antenna users receiving different qualities of the same video stream, via scalable video coding (SVC). Leveraging the layered structure of SVC and exploiting superposition coding (SC) as well as successive interference cancelation (SIC), we propose a layer-based multi-quality multicast beamforming scheme. To reduce the complexity, we also propose a quality-based multi-quality multicast beamforming scheme, which further utilizes the layered structure of SVC and quality information of all users. Under each scheme, for given quality requirements of all users, we formulate the corresponding optimal beamforming design as a non-convex power minimization problem, and obtain a globally optimal solution for a class of special cases as well as a locally optimal solution for the general case. Then, we show that the minimum total transmission power of the quality-based power minimization problem is the same as that of the layer-based power minimization problem, although the former incurs a lower computational complexity. Next, we consider the optimal joint layer selection and quality-based multi-quality multicast beamforming design to maximize the total utility representing the satisfaction with the received video quality for all users under a given maximum transmission power budget, which is NP-hard in general. By exploiting the optimal solution of the quality-based power minimization problem, we develop a greedy algorithm to obtain a near-optimal solution. Finally, numerical results show that the proposed solutions achieve better performance than existing solutions.
|
1910.07169
|
Lanlan Liu
|
Lanlan Liu, Michael Muelly, Jia Deng, Tomas Pfister, Li-Jia Li
|
Generative Modeling for Small-Data Object Detection
|
Published in ICCV 2019
| null | null | null |
cs.CV cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores object detection in the small data regime, where only a
limited number of annotated bounding boxes are available due to data rarity and
annotation expense. This is a common challenge today with machine learning
being applied to many new tasks where obtaining training data is more
challenging, e.g. in medical images with rare diseases that doctors sometimes
only see once in their lifetime. In this work, we explore this problem from a
generative modeling perspective by learning to generate new images with
associated bounding boxes, and using these for training an object detector. We
show that simply training previously proposed generative models does not yield
satisfactory performance because they optimize for image realism rather than
object detection accuracy. To this end we develop a new model with a novel
unrolling mechanism that jointly optimizes the generative model and a detector
such that the generated images improve the performance of the detector. We show
this method outperforms the state of the art on two challenging datasets,
disease detection and small data pedestrian detection, improving the average
precision on NIH Chest X-ray by a relative 20% and localization accuracy by a
relative 50%.
|
[
{
"created": "Wed, 16 Oct 2019 04:57:25 GMT",
"version": "v1"
}
] |
2019-10-17
|
[
[
"Liu",
"Lanlan",
""
],
[
"Muelly",
"Michael",
""
],
[
"Deng",
"Jia",
""
],
[
"Pfister",
"Tomas",
""
],
[
"Li",
"Li-Jia",
""
]
] |
This paper explores object detection in the small data regime, where only a limited number of annotated bounding boxes are available due to data rarity and annotation expense. This is a common challenge today with machine learning being applied to many new tasks where obtaining training data is more challenging, e.g. in medical images with rare diseases that doctors sometimes only see once in their lifetime. In this work, we explore this problem from a generative modeling perspective by learning to generate new images with associated bounding boxes, and using these for training an object detector. We show that simply training previously proposed generative models does not yield satisfactory performance because they optimize for image realism rather than object detection accuracy. To this end we develop a new model with a novel unrolling mechanism that jointly optimizes the generative model and a detector such that the generated images improve the performance of the detector. We show this method outperforms the state of the art on two challenging datasets, disease detection and small data pedestrian detection, improving the average precision on NIH Chest X-ray by a relative 20% and localization accuracy by a relative 50%.
|
1905.08022
|
Caifa Zhou
|
Caifa Zhou and Andreas Wieser
|
An iterative scheme for feature based positioning using a weighted
dissimilarity measure
|
18 pages, 9 figures, and 1 table
| null | null | null |
cs.LG stat.AP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an iterative scheme for feature-based positioning using a new
weighted dissimilarity measure with the goal of reducing the impact of large
errors among the measured or modeled features. The weights are computed from
the location-dependent standard deviations of the features and stored as part
of the reference fingerprint map (RFM). Spatial filtering and kernel smoothing
of the kinematically collected raw data allow efficiently estimating the
standard deviations during RFM generation. In the positioning stage, the
weights control the contribution of each feature to the dissimilarity measure,
which in turn quantifies the difference between the set of online measured
features and the fingerprints stored in the RFM. Features with little
variability contribute more to the estimated position than features with high
variability. Iterations are necessary because the variability depends on the
location, and the location is initially unknown when estimating the position.
Using real WiFi signal strength data from extended test measurements with
ground truth in an office building, we show that the standard deviations of
these features vary considerably within the region of interest and are neither
simple functions of the signal strength nor of the distances from the
corresponding access points. This is the motivation to include the empirical
standard deviations in the RFM. We then analyze the deviations of the estimated
positions with and without the location-dependent weighting. In the present
example, the maximum radial positioning error from ground truth is reduced by
40% compared to kNN without the weighted dissimilarity measure.
|
[
{
"created": "Mon, 20 May 2019 12:12:38 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2019 14:56:24 GMT",
"version": "v2"
}
] |
2019-05-31
|
[
[
"Zhou",
"Caifa",
""
],
[
"Wieser",
"Andreas",
""
]
] |
We propose an iterative scheme for feature-based positioning using a new weighted dissimilarity measure with the goal of reducing the impact of large errors among the measured or modeled features. The weights are computed from the location-dependent standard deviations of the features and stored as part of the reference fingerprint map (RFM). Spatial filtering and kernel smoothing of the kinematically collected raw data allow efficiently estimating the standard deviations during RFM generation. In the positioning stage, the weights control the contribution of each feature to the dissimilarity measure, which in turn quantifies the difference between the set of online measured features and the fingerprints stored in the RFM. Features with little variability contribute more to the estimated position than features with high variability. Iterations are necessary because the variability depends on the location, and the location is initially unknown when estimating the position. Using real WiFi signal strength data from extended test measurements with ground truth in an office building, we show that the standard deviations of these features vary considerably within the region of interest and are neither simple functions of the signal strength nor of the distances from the corresponding access points. This is the motivation to include the empirical standard deviations in the RFM. We then analyze the deviations of the estimated positions with and without the location-dependent weighting. In the present example, the maximum radial positioning error from ground truth is reduced by 40% compared to kNN without the weighted dissimilarity measure.
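The weighted dissimilarity and fingerprint matching described above can be sketched as follows. This is an illustrative simplification under stated assumptions: features are weighted by inverse variance, and the position estimate is the unweighted mean of the k nearest fingerprints, which is not necessarily the paper's exact estimator:

```python
def weighted_dissimilarity(measured, fingerprint, sigmas):
    """Weighted squared difference between the online measurement and a
    stored fingerprint. Each feature is scaled by its location-dependent
    standard deviation, so highly variable (noisy) features contribute
    less to the dissimilarity."""
    return sum(((m - f) / s) ** 2
               for m, f, s in zip(measured, fingerprint, sigmas))

def estimate_position(measured, rfm, k=3):
    """Rank the RFM entries by weighted dissimilarity and average the
    coordinates of the k nearest fingerprints.

    rfm : list of (coords, features, sigmas) tuples, where sigmas are the
          per-feature standard deviations stored with each fingerprint.
    """
    ranked = sorted(rfm,
                    key=lambda e: weighted_dissimilarity(measured, e[1], e[2]))
    nearest = ranked[:k]
    dim = len(nearest[0][0])
    return tuple(sum(entry[0][d] for entry in nearest) / k
                 for d in range(dim))
```

In the full iterative scheme, the sigmas used in the weighting would themselves be looked up at the current position estimate and the matching repeated, since the variability is location-dependent and the location is initially unknown.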
|
1409.5317
|
Scott MacLean
|
Scott MacLean and George Labahn
|
A Bayesian model for recognizing handwritten mathematical expressions
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing handwritten mathematics is a challenging classification problem,
requiring simultaneous identification of all the symbols comprising an input as
well as the complex two-dimensional relationships between symbols and
subexpressions. Because of the ambiguity present in handwritten input, it is
often unrealistic to hope for consistently perfect recognition accuracy. We
present a system which captures all recognizable interpretations of the input
and organizes them in a parse forest from which individual parse trees may be
extracted and reported. If the top-ranked interpretation is incorrect, the user
may request alternates and select the recognition result they desire. The tree
extraction step uses a novel probabilistic tree scoring strategy in which a
Bayesian network is constructed based on the structure of the input, and each
joint variable assignment corresponds to a different parse tree. Parse trees
are then reported in order of decreasing probability. Two accuracy evaluations
demonstrate that the resulting recognition system is more accurate than
previous versions (which used non-probabilistic methods) and other academic
math recognizers.
|
[
{
"created": "Thu, 18 Sep 2014 14:45:24 GMT",
"version": "v1"
}
] |
2014-09-19
|
[
[
"MacLean",
"Scott",
""
],
[
"Labahn",
"George",
""
]
] |
Recognizing handwritten mathematics is a challenging classification problem, requiring simultaneous identification of all the symbols comprising an input as well as the complex two-dimensional relationships between symbols and subexpressions. Because of the ambiguity present in handwritten input, it is often unrealistic to hope for consistently perfect recognition accuracy. We present a system which captures all recognizable interpretations of the input and organizes them in a parse forest from which individual parse trees may be extracted and reported. If the top-ranked interpretation is incorrect, the user may request alternates and select the recognition result they desire. The tree extraction step uses a novel probabilistic tree scoring strategy in which a Bayesian network is constructed based on the structure of the input, and each joint variable assignment corresponds to a different parse tree. Parse trees are then reported in order of decreasing probability. Two accuracy evaluations demonstrate that the resulting recognition system is more accurate than previous versions (which used non-probabilistic methods) and other academic math recognizers.
|
1405.3311
|
Ugo Dal Lago
|
Beniamino Accattoli, Ugo Dal Lago
|
Beta Reduction is Invariant, Indeed (Long Version)
|
29 pages
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other within a polynomial overhead in time. Is
$\lambda$-calculus a reasonable machine? Is there a way to measure the
computational complexity of a $\lambda$-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of $\lambda$-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating $\lambda$-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modelled
after linear logic and proof-nets and admitting a decomposition of
leftmost-outermost derivations with the desired property. Thus, the LSC is
invariant with respect to, say, random access machines. The second step is to
show that LSC is invariant with respect to the $\lambda$-calculus. The size
explosion problem seems to imply that this is not possible: having the same
notions of normal form, evaluation in the LSC is exponentially longer than in
the $\lambda$-calculus. We solve such an impasse by introducing a new form of
shared normal form and shared reduction, deemed useful. Useful evaluation
avoids those steps that only unshare the output without contributing to
$\beta$-redexes, i.e., the steps that cause the blow-up in size.
|
[
{
"created": "Tue, 13 May 2014 21:23:58 GMT",
"version": "v1"
}
] |
2014-05-15
|
[
[
"Accattoli",
"Beniamino",
""
],
[
"Lago",
"Ugo Dal",
""
]
] |
Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is $\lambda$-calculus a reasonable machine? Is there a way to measure the computational complexity of a $\lambda$-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of $\lambda$-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating $\lambda$-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic and proof-nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that LSC is invariant with respect to the $\lambda$-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the $\lambda$-calculus. We solve such an impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to $\beta$-redexes, i.e., the steps that cause the blow-up in size.
|
1709.01710
|
Marina Ljubenovi\'c
|
Marina Ljubenovi\'c and M\'ario A. T. Figueiredo
|
Blind image deblurring using class-adapted image priors
|
5 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blind image deblurring (BID) is an ill-posed inverse problem, usually
addressed by imposing prior knowledge on the (unknown) image and on the
blurring filter. Most of the work on BID has focused on natural images, using
image priors based on statistical properties of generic natural images.
However, in many applications, it is known that the image being recovered
belongs to some specific class (e.g., text, face, fingerprints), and exploiting
this knowledge allows obtaining more accurate priors. In this work, we propose
a method where a Gaussian mixture model (GMM) is used to learn a class-adapted
prior, by training on a dataset of clean images of that class. Experiments show
the competitiveness of the proposed method in terms of restoration quality when
dealing with images containing text, faces, or fingerprints. Additionally,
experiments show that the proposed method is able to handle text images at high
noise levels, outperforming state-of-the-art methods specifically designed for
BID of text images.
|
[
{
"created": "Wed, 6 Sep 2017 08:20:10 GMT",
"version": "v1"
}
] |
2017-09-07
|
[
[
"Ljubenović",
"Marina",
""
],
[
"Figueiredo",
"Mário A. T.",
""
]
] |
Blind image deblurring (BID) is an ill-posed inverse problem, usually addressed by imposing prior knowledge on the (unknown) image and on the blurring filter. Most of the work on BID has focused on natural images, using image priors based on statistical properties of generic natural images. However, in many applications, it is known that the image being recovered belongs to some specific class (e.g., text, face, fingerprints), and exploiting this knowledge allows obtaining more accurate priors. In this work, we propose a method where a Gaussian mixture model (GMM) is used to learn a class-adapted prior, by training on a dataset of clean images of that class. Experiments show the competitiveness of the proposed method in terms of restoration quality when dealing with images containing text, faces, or fingerprints. Additionally, experiments show that the proposed method is able to handle text images at high noise levels, outperforming state-of-the-art methods specifically designed for BID of text images.
|
2203.11092
|
Hang Dong
|
Hang Dong, Mat\'u\v{s} Falis, William Whiteley, Beatrice Alex, Joshua
Matterson, Shaoxiong Ji, Jiaoyan Chen, Honghan Wu
|
Automated Clinical Coding: What, Why, and Where We Are?
|
accepted for npj Digital Medicine
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Clinical coding is the task of transforming medical information in a
patient's health records into structured codes so that they can be used for
statistical analysis. This is a cognitive and time-consuming task that follows
a standard process in order to achieve a high level of consistency. Clinical
coding could potentially be supported by an automated system to improve the
efficiency and accuracy of the process. We introduce the idea of automated
clinical coding and summarise its challenges from the perspective of Artificial
Intelligence (AI) and Natural Language Processing (NLP), based on the
literature, our project experience over the past two and a half years (late 2019
- early 2022), and discussions with clinical coding experts in Scotland and the
UK. Our research reveals the gaps between the current deep learning-based
approach applied to clinical coding and the need for explainability and
consistency in real-world practice. Knowledge-based methods that represent and
reason the standard, explainable process of a task may need to be incorporated
into deep learning-based methods for clinical coding. Automated clinical coding
is a promising task for AI, despite the technical and organisational
challenges. Coders need to be involved in the development process. There
is much to achieve to develop and deploy an AI-based automated system to
support coding in the next five years and beyond.
|
[
{
"created": "Mon, 21 Mar 2022 16:17:38 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Aug 2022 13:58:00 GMT",
"version": "v2"
},
{
"created": "Sun, 9 Oct 2022 14:18:20 GMT",
"version": "v3"
}
] |
2022-10-11
|
[
[
"Dong",
"Hang",
""
],
[
"Falis",
"Matúš",
""
],
[
"Whiteley",
"William",
""
],
[
"Alex",
"Beatrice",
""
],
[
"Matterson",
"Joshua",
""
],
[
"Ji",
"Shaoxiong",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Wu",
"Honghan",
""
]
] |
Clinical coding is the task of transforming medical information in a patient's health records into structured codes so that they can be used for statistical analysis. This is a cognitive and time-consuming task that follows a standard process in order to achieve a high level of consistency. Clinical coding could potentially be supported by an automated system to improve the efficiency and accuracy of the process. We introduce the idea of automated clinical coding and summarise its challenges from the perspective of Artificial Intelligence (AI) and Natural Language Processing (NLP), based on the literature, our project experience over the past two and a half years (late 2019 - early 2022), and discussions with clinical coding experts in Scotland and the UK. Our research reveals the gaps between the current deep learning-based approach applied to clinical coding and the need for explainability and consistency in real-world practice. Knowledge-based methods that represent and reason the standard, explainable process of a task may need to be incorporated into deep learning-based methods for clinical coding. Automated clinical coding is a promising task for AI, despite the technical and organisational challenges. Coders need to be involved in the development process. There is much to achieve to develop and deploy an AI-based automated system to support coding in the next five years and beyond.
|
2203.15425
|
Radek O\v{s}lej\v{s}ek
|
Martin Macak and Radek Oslejsek and Barbora Buhnova
|
Process Mining Analysis of Puzzle-Based Cybersecurity Training
| null | null |
10.1145/3502718.3524819
| null |
cs.CR cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The quality of hands-on cybersecurity training is crucial to mitigate cyber
threats and attacks effectively. However, practical cybersecurity training is
strongly process-oriented, making the post-training analysis very difficult.
This paper presents process-mining methods applied to the learning analytics
workflow. We introduce a unified approach to reconstruct behavioral graphs from
sparse event logs of cyber ranges. Furthermore, we discuss significant data
features that affect their practical usability for educational process mining.
Based on that, methods of dealing with the complexity of process graphs are
presented, taking advantage of the puzzle-based gamification of in-class
training sessions.
|
[
{
"created": "Tue, 29 Mar 2022 10:45:05 GMT",
"version": "v1"
}
] |
2022-03-30
|
[
[
"Macak",
"Martin",
""
],
[
"Oslejsek",
"Radek",
""
],
[
"Buhnova",
"Barbora",
""
]
] |
The quality of hands-on cybersecurity training is crucial to mitigate cyber threats and attacks effectively. However, practical cybersecurity training is strongly process-oriented, making the post-training analysis very difficult. This paper presents process-mining methods applied to the learning analytics workflow. We introduce a unified approach to reconstruct behavioral graphs from sparse event logs of cyber ranges. Furthermore, we discuss significant data features that affect their practical usability for educational process mining. Based on that, methods of dealing with the complexity of process graphs are presented, taking advantage of the puzzle-based gamification of in-class training sessions.
|
1710.10453
|
Avi Caciularu
|
Mor Cohen, Avi Caciularu, Idan Rejwan, Jonathan Berant
|
Inducing Regular Grammars Using Recurrent Neural Networks
|
Accepted to L&R 2018 workshop, ICML & IJCAI
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grammar induction is the task of learning a grammar from a set of examples.
Recently, neural networks have been shown to be powerful learning machines that
can identify patterns in streams of data. In this work we investigate their
effectiveness in inducing a regular grammar from data, without any assumptions
about the grammar. We train a recurrent neural network to distinguish between
strings that are in or outside a regular language, and utilize an algorithm for
extracting the learned finite-state automaton. We apply this method to several
regular languages and find unexpected results regarding the connections between
the network's states that may be regarded as evidence for generalization.
|
[
{
"created": "Sat, 28 Oct 2017 12:00:09 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jun 2018 14:27:47 GMT",
"version": "v2"
}
] |
2018-06-27
|
[
[
"Cohen",
"Mor",
""
],
[
"Caciularu",
"Avi",
""
],
[
"Rejwan",
"Idan",
""
],
[
"Berant",
"Jonathan",
""
]
] |
Grammar induction is the task of learning a grammar from a set of examples. Recently, neural networks have been shown to be powerful learning machines that can identify patterns in streams of data. In this work we investigate their effectiveness in inducing a regular grammar from data, without any assumptions about the grammar. We train a recurrent neural network to distinguish between strings that are in or outside a regular language, and utilize an algorithm for extracting the learned finite-state automaton. We apply this method to several regular languages and find unexpected results regarding the connections between the network's states that may be regarded as evidence for generalization.
|
1906.00114
|
Tom\'a\v{s} Musil
|
Tom\'a\v{s} Musil
|
Examining Structure of Word Embeddings with PCA
|
12 pages, 6 figures, accepted to The 22th International Conference of
Text, Speech and Dialogue (TSD2019) in Ljubljana
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we compare the structure of Czech word embeddings for
English-Czech neural machine translation (NMT), word2vec and sentiment
analysis. We show that
although it is possible to successfully predict part of speech (POS) tags from
word embeddings of word2vec and various translation models, not all of the
embedding spaces show the same structure. The information about POS is present
in word2vec embeddings, but the high degree of organization by POS in the NMT
decoder suggests that this information is more important for machine
translation and therefore the NMT model represents it in a more direct way. Our
method is based on correlation of principal component analysis (PCA) dimensions
with categorical linguistic data. We also show that further examining
histograms of classes along the principal component is important to understand
the structure of representation of information in embeddings.
|
[
{
"created": "Fri, 31 May 2019 22:47:56 GMT",
"version": "v1"
}
] |
2019-06-04
|
[
[
"Musil",
"Tomáš",
""
]
] |
In this paper we compare the structure of Czech word embeddings for English-Czech neural machine translation (NMT), word2vec and sentiment analysis. We show that although it is possible to successfully predict part of speech (POS) tags from word embeddings of word2vec and various translation models, not all of the embedding spaces show the same structure. The information about POS is present in word2vec embeddings, but the high degree of organization by POS in the NMT decoder suggests that this information is more important for machine translation and therefore the NMT model represents it in a more direct way. Our method is based on correlation of principal component analysis (PCA) dimensions with categorical linguistic data. We also show that further examining histograms of classes along the principal component is important to understand the structure of representation of information in embeddings.
|
2009.12215
|
Chengwen Xing
|
Chengwen Xing, Shuai Wang, Sheng Chen, Shaodan Ma, H. Vincent Poor,
Lajos Hanzo
|
Matrix-Monotonic Optimization Part II: Multi-Variable Optimization
|
Final version published in IEEE Transactions on Signal Processing.
arXiv admin note: substantial text overlap with arXiv:1810.11244
| null |
10.1109/TSP.2020.3037495
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In contrast to Part I of this treatise [1] that focuses on the optimization
problems associated with single matrix variables, in this paper, we investigate
the application of the matrix-monotonic optimization framework in the
optimization problems associated with multiple matrix variables. It is revealed
that matrix-monotonic optimization still works even for multiple matrix-variate
based optimization problems, provided that certain conditions are satisfied.
Using this framework, the optimal structures of the matrix variables can be
derived and the associated multiple matrix-variate optimization problems can be
substantially simplified. In this paper, several specific examples are given,
which are essentially open problems. Firstly, we investigate multi-user
multiple-input multiple-output (MU-MIMO) uplink communications under various
power constraints. Using the proposed framework, the optimal structures of the
precoding matrices at each user under various power constraints can be derived.
Secondly, we consider the optimization of the signal compression matrices at
each sensor under various power constraints in distributed sensor networks.
Finally, we investigate the transceiver optimization for multi-hop
amplify-and-forward (AF) MIMO relaying networks with imperfect channel state
information (CSI) under various power constraints. At the end of this paper,
several simulation results are given to demonstrate the accuracy of the
proposed theoretical results.
|
[
{
"created": "Thu, 24 Sep 2020 02:04:03 GMT",
"version": "v1"
}
] |
2021-02-24
|
[
[
"Xing",
"Chengwen",
""
],
[
"Wang",
"Shuai",
""
],
[
"Chen",
"Sheng",
""
],
[
"Ma",
"Shaodan",
""
],
[
"Poor",
"H. Vincent",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
In contrast to Part I of this treatise [1] that focuses on the optimization problems associated with single matrix variables, in this paper, we investigate the application of the matrix-monotonic optimization framework in the optimization problems associated with multiple matrix variables. It is revealed that matrix-monotonic optimization still works even for multiple matrix-variate based optimization problems, provided that certain conditions are satisfied. Using this framework, the optimal structures of the matrix variables can be derived and the associated multiple matrix-variate optimization problems can be substantially simplified. In this paper, several specific examples are given, which are essentially open problems. Firstly, we investigate multi-user multiple-input multiple-output (MU-MIMO) uplink communications under various power constraints. Using the proposed framework, the optimal structures of the precoding matrices at each user under various power constraints can be derived. Secondly, we consider the optimization of the signal compression matrices at each sensor under various power constraints in distributed sensor networks. Finally, we investigate the transceiver optimization for multi-hop amplify-and-forward (AF) MIMO relaying networks with imperfect channel state information (CSI) under various power constraints. At the end of this paper, several simulation results are given to demonstrate the accuracy of the proposed theoretical results.
|
2206.06518
|
Vandad Davoodnia
|
Vandad Davoodnia, Saeed Ghorbani, Ali Etemad
|
Estimating Pose from Pressure Data for Smart Beds with Deep Image-based
Pose Estimators
|
The version of record of this article, first published in Applied
Intelligence, is available online at Publisher's website
https://doi.org/10.1007/s10489-021-02418-y. arXiv admin note: substantial
text overlap with arXiv:1908.08919
|
Applied Intelligence (2021): 1-15
|
10.1007/s10489-021-02418-y
|
1573-7497
|
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In-bed pose estimation has shown value in fields such as hospital patient
monitoring, sleep studies, and smart homes. In this paper, we explore different
strategies for detecting body pose from highly ambiguous pressure data, with
the aid of pre-existing pose estimators. We examine the performance of
pre-trained pose estimators by using them either directly or by re-training
them on two pressure datasets. We also explore other strategies utilizing a
learnable pre-processing domain adaptation step, which transforms the vague
pressure maps to a representation closer to the expected input space of common
purpose pose estimation modules. Accordingly, we used a fully convolutional
network with multiple scales to provide the pose-specific characteristics of
the pressure maps to the pre-trained pose estimation module. Our complete
analysis of different approaches shows that the combination of a learnable
pre-processing module with re-training pre-existing image-based pose
estimators on the pressure data is able to overcome issues such as highly vague
pressure points to achieve very high pose estimation accuracy.
|
[
{
"created": "Mon, 13 Jun 2022 23:29:28 GMT",
"version": "v1"
}
] |
2022-06-15
|
[
[
"Davoodnia",
"Vandad",
""
],
[
"Ghorbani",
"Saeed",
""
],
[
"Etemad",
"Ali",
""
]
] |
In-bed pose estimation has shown value in fields such as hospital patient monitoring, sleep studies, and smart homes. In this paper, we explore different strategies for detecting body pose from highly ambiguous pressure data, with the aid of pre-existing pose estimators. We examine the performance of pre-trained pose estimators by using them either directly or by re-training them on two pressure datasets. We also explore other strategies utilizing a learnable pre-processing domain adaptation step, which transforms the vague pressure maps to a representation closer to the expected input space of common purpose pose estimation modules. Accordingly, we used a fully convolutional network with multiple scales to provide the pose-specific characteristics of the pressure maps to the pre-trained pose estimation module. Our complete analysis of different approaches shows that the combination of a learnable pre-processing module with re-training pre-existing image-based pose estimators on the pressure data is able to overcome issues such as highly vague pressure points to achieve very high pose estimation accuracy.
|
2306.04261
|
Fardad Vakilipoor
|
Fardad Vakilipoor, Luca Barletta, Stefano Bregni, and Maurizio
Magarini
|
Achievable Rate Analysis in Molecular Channels with Reset-Counting Fully
Absorbing Receivers
|
Submitted to IEEE Global Communications Conference, December 2023,
Kuala Lumpur, Malaysia
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we investigate the achievable rate of a diffusive Molecular
Communication (MC) channel with fully absorbing receiver, which counts
particles absorbed along each symbol interval and resets the counter at every
interval (reset-counting). The MC channel is affected by a memory effect and
thus inter-symbol interference (ISI), due to the delayed arrival of molecules.
To reduce complexity, our analysis is based on measuring the channel memory as
an integer number of symbol intervals and on a single-sample memoryless
detector. Thus, in our model the effect of released particles remains effective
for a limited number of symbol intervals. We optimize the detector threshold
for maximizing capacity, approximate as Gaussian the received signal
distribution, and calculate the channel mutual information affected by ISI, in
the case of binary concentration shift keying modulation. To the best of our
knowledge, there are no previous investigations in the literature on the
achievable rate in this type of system. Our results demonstrate that, in
general, the optimal input probability distribution achieving the maximum
achievable rate may not be uniform. In particular, when the symbol interval is small (strong
ISI), the maximum achievable rate does not occur with equiprobable transmission
of bits.
|
[
{
"created": "Wed, 7 Jun 2023 08:59:39 GMT",
"version": "v1"
}
] |
2023-06-08
|
[
[
"Vakilipoor",
"Fardad",
""
],
[
"Barletta",
"Luca",
""
],
[
"Bregni",
"Stefano",
""
],
[
"Magarini",
"Maurizio",
""
]
] |
In this paper, we investigate the achievable rate of a diffusive Molecular Communication (MC) channel with fully absorbing receiver, which counts particles absorbed along each symbol interval and resets the counter at every interval (reset-counting). The MC channel is affected by a memory effect and thus inter-symbol interference (ISI), due to the delayed arrival of molecules. To reduce complexity, our analysis is based on measuring the channel memory as an integer number of symbol intervals and on a single-sample memoryless detector. Thus, in our model the effect of released particles remains effective for a limited number of symbol intervals. We optimize the detector threshold for maximizing capacity, approximate as Gaussian the received signal distribution, and calculate the channel mutual information affected by ISI, in the case of binary concentration shift keying modulation. To the best of our knowledge, there are no previous investigations in the literature on the achievable rate in this type of system. Our results demonstrate that, in general, the optimal input probability distribution achieving the maximum achievable rate may not be uniform. In particular, when the symbol interval is small (strong ISI), the maximum achievable rate does not occur with equiprobable transmission of bits.
|
2105.13287
|
Dung Nguyen
|
Dung Nguyen and Anil Vullikanti
|
Differentially Private Densest Subgraph Detection
|
Accepted by ICML 2021
| null | null | null |
cs.DS cs.AI cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Densest subgraph detection is a fundamental graph mining problem, with a
large number of applications. There has been a lot of work on efficient
algorithms for finding the densest subgraph in massive networks. However, in
many domains, the network is private, and returning a densest subgraph can
reveal information about the network. Differential privacy is a powerful
framework to handle such settings. We study the densest subgraph problem in the
edge privacy model, in which the edges of the graph are private. We present the
first sequential and parallel differentially private algorithms for this
problem. We show that our algorithms have an additive approximation guarantee.
We evaluate our algorithms on a large number of real-world networks, and
observe a good privacy-accuracy tradeoff when the network has high density.
|
[
{
"created": "Thu, 27 May 2021 16:36:03 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Jun 2021 17:33:02 GMT",
"version": "v2"
}
] |
2024-06-05
|
[
[
"Nguyen",
"Dung",
""
],
[
"Vullikanti",
"Anil",
""
]
] |
Densest subgraph detection is a fundamental graph mining problem, with a large number of applications. There has been a lot of work on efficient algorithms for finding the densest subgraph in massive networks. However, in many domains, the network is private, and returning a densest subgraph can reveal information about the network. Differential privacy is a powerful framework to handle such settings. We study the densest subgraph problem in the edge privacy model, in which the edges of the graph are private. We present the first sequential and parallel differentially private algorithms for this problem. We show that our algorithms have an additive approximation guarantee. We evaluate our algorithms on a large number of real-world networks, and observe a good privacy-accuracy tradeoff when the network has high density.
|
1709.03787
|
Balazs Vedres
|
Balazs Vedres
|
Forbidden triads and Creative Success in Jazz: The Miles Davis Factor
| null |
Applied Network Science (2017) 2:31
|
10.1007/s41109-017-0051-2
| null |
cs.SI nlin.AO stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article argues for the importance of forbidden triads - open triads with
high-weight edges - in predicting success in creative fields. Forbidden triads
had been treated as a residual category beyond closed and open triads, yet I
argue that these structures provide opportunities to combine socially evolved
styles in new ways. Using data on the entire history of recorded jazz from 1896
to 2010, I show that observed collaborations have tolerated the openness of
high weight triads more than expected, observed jazz sessions had more
forbidden triads than expected, and the density of forbidden triads contributed
to the success of recording sessions, measured by the number of record releases
of session material. The article also shows that the sessions of Miles Davis
had received an especially high boost from forbidden triads.
|
[
{
"created": "Tue, 12 Sep 2017 11:28:25 GMT",
"version": "v1"
}
] |
2017-10-06
|
[
[
"Vedres",
"Balazs",
""
]
] |
This article argues for the importance of forbidden triads - open triads with high-weight edges - in predicting success in creative fields. Forbidden triads had been treated as a residual category beyond closed and open triads, yet I argue that these structures provide opportunities to combine socially evolved styles in new ways. Using data on the entire history of recorded jazz from 1896 to 2010, I show that observed collaborations have tolerated the openness of high weight triads more than expected, observed jazz sessions had more forbidden triads than expected, and the density of forbidden triads contributed to the success of recording sessions, measured by the number of record releases of session material. The article also shows that the sessions of Miles Davis had received an especially high boost from forbidden triads.
|
1804.07675
|
Christian H\"ager
|
Shen Li, Christian H\"ager, Nil Garcia, Henk Wymeersch
|
Achievable Information Rates for Nonlinear Fiber Communication via
End-to-end Autoencoder Learning
|
3 pages, 4 figures, fixed typos, revised layout
| null | null | null |
cs.IT math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning is used to compute achievable information rates (AIRs) for a
simplified fiber channel. The approach jointly optimizes the input distribution
(constellation shaping) and the auxiliary channel distribution to compute AIRs
without explicit channel knowledge in an end-to-end fashion.
|
[
{
"created": "Fri, 20 Apr 2018 15:30:06 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Sep 2018 08:58:55 GMT",
"version": "v2"
}
] |
2018-09-18
|
[
[
"Li",
"Shen",
""
],
[
"Häger",
"Christian",
""
],
[
"Garcia",
"Nil",
""
],
[
"Wymeersch",
"Henk",
""
]
] |
Machine learning is used to compute achievable information rates (AIRs) for a simplified fiber channel. The approach jointly optimizes the input distribution (constellation shaping) and the auxiliary channel distribution to compute AIRs without explicit channel knowledge in an end-to-end fashion.
|
2205.14458
|
Longzhen Yang
|
Longzhen Yang, Yihang Liu, Yitao Peng, Lianghua He
|
Variational Transformer: A Framework Beyond the Trade-off between
Accuracy and Diversity for Image Captioning
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accuracy and Diversity are two essential metrizable manifestations in
generating natural and semantically correct captions. Many efforts have been
made to enhance one of them while the other decays, owing to the trade-off gap.
In this work, we show that the inferior accuracy standard drawn from human
annotations (leave-one-out) is not appropriate for machine-generated captions.
To improve diversity while maintaining solid accuracy, we exploit a novel
Variational Transformer framework. By introducing the "Invisible Information
Prior" and the "Auto-selectable GMM", we instruct the encoder to learn the
precise language information and object relation in different scenes for
accuracy assurance. By introducing the "Range-Median Reward" baseline, we
retain more diverse candidates with higher rewards during the RL-based training
process for diversity assurance. Experiments show that our method
simultaneously improves accuracy (CIDEr) and diversity (self-CIDEr) by up to
1.1 and 4.8 percent. Our method also achieves the semantic-retrieval
performance closest to human annotations, with 50.3 (vs. 50.6 for humans) for
R@1(i2t).
|
[
{
"created": "Sat, 28 May 2022 15:29:14 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Sep 2022 12:21:58 GMT",
"version": "v2"
}
] |
2022-09-22
|
[
[
"Yang",
"Longzhen",
""
],
[
"Liu",
"Yihang",
""
],
[
"Peng",
"Yitao",
""
],
[
"He",
"Lianghua",
""
]
] |
Accuracy and Diversity are two essential metrizable manifestations in generating natural and semantically correct captions. Many efforts have been made to enhance one of them while the other decays, owing to the trade-off gap. In this work, we show that the inferior accuracy standard drawn from human annotations (leave-one-out) is not appropriate for machine-generated captions. To improve diversity while maintaining solid accuracy, we exploit a novel Variational Transformer framework. By introducing the "Invisible Information Prior" and the "Auto-selectable GMM", we instruct the encoder to learn the precise language information and object relation in different scenes for accuracy assurance. By introducing the "Range-Median Reward" baseline, we retain more diverse candidates with higher rewards during the RL-based training process for diversity assurance. Experiments show that our method simultaneously improves accuracy (CIDEr) and diversity (self-CIDEr) by up to 1.1 and 4.8 percent. Our method also achieves the semantic-retrieval performance closest to human annotations, with 50.3 (vs. 50.6 for humans) for R@1(i2t).
|
2103.06125
|
Lucas N. Ferreira
|
Lucas N. Ferreira, Jim Whitehead
|
Learning to Generate Music With Sentiment
|
International Society for Music Information Retrieval (2019)
| null | null | null |
cs.LG cs.IR cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Deep Learning models have shown very promising results in automatically
composing polyphonic music pieces. However, it is very hard to control such
models in order to guide the compositions towards a desired goal. We are
interested in controlling a model to automatically generate music with a given
sentiment. This paper presents a generative Deep Learning model that can be
directed to compose music with a given sentiment. Besides music generation, the
same model can be used for sentiment analysis of symbolic music. We evaluate
the accuracy of the model in classifying sentiment of symbolic music using a
new dataset of video game soundtracks. Results show that our model is able to
obtain good prediction accuracy. A user study shows that human subjects agreed
that the generated music has the intended sentiment; however, negative pieces
can be ambiguous.
|
[
{
"created": "Tue, 9 Mar 2021 03:16:52 GMT",
"version": "v1"
}
] |
2021-03-11
|
[
[
"Ferreira",
"Lucas N.",
""
],
[
"Whitehead",
"Jim",
""
]
] |
Deep Learning models have shown very promising results in automatically composing polyphonic music pieces. However, it is very hard to control such models in order to guide the compositions towards a desired goal. We are interested in controlling a model to automatically generate music with a given sentiment. This paper presents a generative Deep Learning model that can be directed to compose music with a given sentiment. Besides music generation, the same model can be used for sentiment analysis of symbolic music. We evaluate the accuracy of the model in classifying sentiment of symbolic music using a new dataset of video game soundtracks. Results show that our model is able to obtain good prediction accuracy. A user study shows that human subjects agreed that the generated music has the intended sentiment; however, negative pieces can be ambiguous.
|
2202.04067
|
Yedid Hoshen
|
Yedid Hoshen
|
Time Series Anomaly Detection by Cumulative Radon Features
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting anomalous time series is key for scientific, medical and industrial
tasks, but is challenging due to its inherent unsupervised nature. In recent
years, progress has been made on this task by learning increasingly more
complex features, often using deep neural networks. In this work, we argue that
shallow features suffice when combined with distribution distance measures. Our
approach models each time series as a high dimensional empirical distribution
of features, where each time-point constitutes a single sample. Modeling the
distance between a test time series and the normal training set therefore
requires efficiently measuring the distance between multivariate probability
distributions. We show that by parameterizing each time series using cumulative
Radon features, we are able to efficiently and effectively model the
distribution of normal time series. Our theoretically grounded but
simple-to-implement approach is evaluated on multiple datasets and shown to
achieve better results than established, classical methods as well as complex,
state-of-the-art deep learning methods. Code is provided.
|
[
{
"created": "Tue, 8 Feb 2022 18:58:53 GMT",
"version": "v1"
}
] |
2022-02-09
|
[
[
"Hoshen",
"Yedid",
""
]
] |
Detecting anomalous time series is key for scientific, medical and industrial tasks, but is challenging due to its inherent unsupervised nature. In recent years, progress has been made on this task by learning increasingly more complex features, often using deep neural networks. In this work, we argue that shallow features suffice when combined with distribution distance measures. Our approach models each time series as a high dimensional empirical distribution of features, where each time-point constitutes a single sample. Modeling the distance between a test time series and the normal training set therefore requires efficiently measuring the distance between multivariate probability distributions. We show that by parameterizing each time series using cumulative Radon features, we are able to efficiently and effectively model the distribution of normal time series. Our theoretically grounded but simple-to-implement approach is evaluated on multiple datasets and shown to achieve better results than established, classical methods as well as complex, state-of-the-art deep learning methods. Code is provided.
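The core idea — each time series as an empirical distribution of per-time-point features, compared via 1-D projections onto random directions — can be sketched as follows. This is an illustrative reading, assuming quantile (inverse-CDF) summaries of the projections; the paper's exact feature extraction and distance computation may differ.

```python
import numpy as np

def cumulative_radon_features(x, directions, n_quantiles=16):
    """Summarize a time series x of shape (T, d) -- one feature vector
    per time point -- by the quantile (inverse-CDF) values of its 1-D
    projections onto the given unit directions."""
    proj = x @ directions.T                       # (T, n_dirs) projections
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(proj, qs, axis=0).ravel()  # (n_quantiles * n_dirs,)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(8, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

train = cumulative_radon_features(rng.normal(size=(200, 3)), dirs)
shifted = cumulative_radon_features(rng.normal(size=(200, 3)) + 2.0, dirs)
# Distance between quantile vectors acts as a sliced-Wasserstein-style
# anomaly score relative to the normal training series.
score = np.abs(shifted - train).mean()
```

A series drawn from the normal distribution scores near zero, while the mean-shifted series scores much higher, which is the behavior an anomaly detector needs.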
|
1802.01618
|
Imene Trigui
|
Imene Trigui, and Sofiene Affes
|
Unified Analysis and Optimization of D2D Communications in Cellular
Networks Over Fading Channels
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper develops an innovative approach to the modeling and analysis of
downlink cellular networks with device-to-device (D$2$D) transmissions. The
analytical embodiment of the signal-to-noise-and-interference ratio (SINR)
analysis in general fading channels is unified due to the H-transform theory, a
taxonomy never considered before in stochastic geometry-based cellular network
modeling and analysis. The proposed framework has the potential, owing to the
versatility of Fox's H-functions, of significantly simplifying the
cumbersome analysis procedure and representation of D$2$D and cellular
coverage, while subsuming those previously derived for all the known simple and
composite fading models. By harnessing its tractability, the developed
statistical machinery is employed to launch an investigation into the optimal
design of coexisting D$2$D and cellular communications. We propose novel
coverage-aware power control combined with opportunistic access control to
maximize the area spectral efficiency (ASE) of D$2$D communications. Simulation
results substantiate performance gains achieved by the proposed optimization
framework in terms of cellular communication coverage probability, average
D$2$D transmit power, and the ASE of D$2$D communications under different
fading models and link- and network-level dynamics.
|
[
{
"created": "Mon, 5 Feb 2018 19:34:40 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Apr 2018 19:27:26 GMT",
"version": "v2"
}
] |
2018-04-16
|
[
[
"Trigui",
"Imene",
""
],
[
"Affes",
"Sofiene",
""
]
] |
This paper develops an innovative approach to the modeling and analysis of downlink cellular networks with device-to-device (D$2$D) transmissions. The analytical embodiment of the signal-to-noise-and-interference ratio (SINR) analysis in general fading channels is unified due to the H-transform theory, a taxonomy never considered before in stochastic geometry-based cellular network modeling and analysis. The proposed framework has the potential, owing to the versatility of Fox's H-functions, of significantly simplifying the cumbersome analysis procedure and representation of D$2$D and cellular coverage, while subsuming those previously derived for all the known simple and composite fading models. By harnessing its tractability, the developed statistical machinery is employed to launch an investigation into the optimal design of coexisting D$2$D and cellular communications. We propose novel coverage-aware power control combined with opportunistic access control to maximize the area spectral efficiency (ASE) of D$2$D communications. Simulation results substantiate performance gains achieved by the proposed optimization framework in terms of cellular communication coverage probability, average D$2$D transmit power, and the ASE of D$2$D communications under different fading models and link- and network-level dynamics.
|
2008.11055
|
Luciano Oliveira
|
Gabriel Lefundes, Luciano Oliveira
|
On estimating gaze by self-attention augmented convolutions
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Estimation of 3D gaze is highly relevant to multiple fields, including but
not limited to interactive systems, specialized human-computer interfaces, and
behavioral research. Although recently deep learning methods have boosted the
accuracy of appearance-based gaze estimation, there is still room for
improvement in the network architectures for this particular task. Therefore we
propose here a novel network architecture grounded on self-attention augmented
convolutions to improve the quality of the learned features during the training
of a shallower residual network. The rationale is that the self-attention
mechanism can help outperform deeper architectures by learning dependencies between
distant regions in full-face images. This mechanism can also create better and
more spatially-aware feature representations derived from the face and eye
images before gaze regression. We dubbed our framework ARes-gaze, which
explores our Attention-augmented ResNet (ARes-14) as twin convolutional
backbones. In our experiments, results showed a decrease of the average angular
error by 2.38% when compared to state-of-the-art methods on the MPIIFaceGaze
data set, and second place on the EyeDiap data set. It is noteworthy that our
proposed framework was the only one to reach high accuracy simultaneously on
both data sets.
|
[
{
"created": "Tue, 25 Aug 2020 14:29:05 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Nov 2020 13:49:19 GMT",
"version": "v2"
}
] |
2020-11-04
|
[
[
"Lefundes",
"Gabriel",
""
],
[
"Oliveira",
"Luciano",
""
]
] |
Estimation of 3D gaze is highly relevant to multiple fields, including but not limited to interactive systems, specialized human-computer interfaces, and behavioral research. Although recently deep learning methods have boosted the accuracy of appearance-based gaze estimation, there is still room for improvement in the network architectures for this particular task. Therefore we propose here a novel network architecture grounded on self-attention augmented convolutions to improve the quality of the learned features during the training of a shallower residual network. The rationale is that the self-attention mechanism can help outperform deeper architectures by learning dependencies between distant regions in full-face images. This mechanism can also create better and more spatially-aware feature representations derived from the face and eye images before gaze regression. We dubbed our framework ARes-gaze, which explores our Attention-augmented ResNet (ARes-14) as twin convolutional backbones. In our experiments, results showed a decrease of the average angular error by 2.38% when compared to state-of-the-art methods on the MPIIFaceGaze data set, and second place on the EyeDiap data set. It is noteworthy that our proposed framework was the only one to reach high accuracy simultaneously on both data sets.
|
1809.00258
|
Yogatheesan Varatharajah
|
Yogatheesan Varatharajah, Brent Berry, Sanmi Koyejo, and Ravishankar
Iyer
|
A Contextual-bandit-based Approach for Informed Decision-making in
Clinical Trials
|
13 pages, 2 figures
| null | null | null |
cs.AI stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clinical trials involving multiple treatments utilize randomization of the
treatment assignments to enable the evaluation of treatment efficacies in an
unbiased manner. Such evaluation is performed in post hoc studies that usually
use supervised-learning methods that rely on large amounts of data collected in
a randomized fashion. That approach often proves to be suboptimal in that some
participants may suffer and even die as a result of having not received the
most appropriate treatments during the trial. Reinforcement-learning methods
improve the situation by making it possible to learn the treatment efficacies
dynamically during the course of the trial, and to adapt treatment assignments
accordingly. Recent efforts using \textit{multi-arm bandits}, a type of
reinforcement-learning method, have focused on maximizing clinical outcomes
for a population that was assumed to be homogeneous. However, those approaches
have failed to account for the variability among participants that is becoming
increasingly evident as a result of recent clinical-trial-based studies. We
present a contextual-bandit-based online treatment optimization algorithm that,
in choosing treatments for new participants in the study, takes into account
not only the maximization of the clinical outcomes but also the patient
characteristics. We evaluated our algorithm using a real clinical trial dataset
from the International Stroke Trial. The results of our retrospective analysis
indicate that the proposed approach performs significantly better than either a
random assignment of treatments (the current gold standard) or a
multi-arm-bandit-based approach, providing substantial gains in the percentage
of participants who are assigned the most suitable treatments. The
contextual-bandit and multi-arm bandit approaches provide 72.63% and 64.34%
gains, respectively, compared to a random assignment.
|
[
{
"created": "Sat, 1 Sep 2018 22:07:23 GMT",
"version": "v1"
}
] |
2018-09-10
|
[
[
"Varatharajah",
"Yogatheesan",
""
],
[
"Berry",
"Brent",
""
],
[
"Koyejo",
"Sanmi",
""
],
[
"Iyer",
"Ravishankar",
""
]
] |
Clinical trials involving multiple treatments utilize randomization of the treatment assignments to enable the evaluation of treatment efficacies in an unbiased manner. Such evaluation is performed in post hoc studies that usually use supervised-learning methods that rely on large amounts of data collected in a randomized fashion. That approach often proves to be suboptimal in that some participants may suffer and even die as a result of having not received the most appropriate treatments during the trial. Reinforcement-learning methods improve the situation by making it possible to learn the treatment efficacies dynamically during the course of the trial, and to adapt treatment assignments accordingly. Recent efforts using \textit{multi-arm bandits}, a type of reinforcement-learning method, have focused on maximizing clinical outcomes for a population that was assumed to be homogeneous. However, those approaches have failed to account for the variability among participants that is becoming increasingly evident as a result of recent clinical-trial-based studies. We present a contextual-bandit-based online treatment optimization algorithm that, in choosing treatments for new participants in the study, takes into account not only the maximization of the clinical outcomes but also the patient characteristics. We evaluated our algorithm using a real clinical trial dataset from the International Stroke Trial. The results of our retrospective analysis indicate that the proposed approach performs significantly better than either a random assignment of treatments (the current gold standard) or a multi-arm-bandit-based approach, providing substantial gains in the percentage of participants who are assigned the most suitable treatments. The contextual-bandit and multi-arm bandit approaches provide 72.63% and 64.34% gains, respectively, compared to a random assignment.
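A contextual bandit of the kind described can be sketched with LinUCB, a standard algorithm that keeps one ridge-regression reward model per treatment arm and assigns each new patient the arm with the highest upper confidence bound for that patient's context. This is an illustrative stand-in with a simulated trial, not the paper's exact algorithm or data.

```python
import numpy as np

class LinUCB:
    """One ridge-regression reward model per treatment arm; choose the
    arm whose upper confidence bound is highest for this context."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # X^T X + I per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # X^T r per arm

    def choose(self, x):
        ucbs = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # ridge estimate of the arm's reward model
            ucbs.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(ucbs))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Simulated trial: two treatments whose efficacy depends on patient context.
rng = np.random.default_rng(1)
true_theta = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
bandit = LinUCB(n_arms=2, dim=3)
for _ in range(500):
    x = rng.normal(size=3)  # patient characteristics
    arm = bandit.choose(x)
    reward = true_theta[arm] @ x + 0.1 * rng.normal()
    bandit.update(arm, x, reward)
```

After the simulated rounds, the bandit routes each context to the treatment that truly suits it, which is exactly the context-sensitivity a multi-arm bandit lacks.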
|
2204.11138
|
Su Jiang
|
Su Jiang, Louis J. Durlofsky
|
Use of Multifidelity Training Data and Transfer Learning for Efficient
Construction of Subsurface Flow Surrogate Models
| null | null |
10.1016/j.jcp.2022.111800
| null |
cs.LG physics.geo-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data assimilation presents computational challenges because many
high-fidelity models must be simulated. Various deep-learning-based surrogate
modeling techniques have been developed to reduce the simulation costs
associated with these applications. However, to construct data-driven surrogate
models, several thousand high-fidelity simulation runs may be required to
provide training samples, and these computations can make training
prohibitively expensive. To address this issue, in this work we present a
framework where most of the training simulations are performed on coarsened
geomodels. These models are constructed using a flow-based upscaling method.
The framework entails the use of a transfer-learning procedure, incorporated
within an existing recurrent residual U-Net architecture, in which network
training is accomplished in three steps. In the first step, where the bulk of
the training is performed, only low-fidelity simulation results are used. The
second and third steps, in which the output layer is trained and the overall
network is fine-tuned, require a relatively small number of high-fidelity
simulations. Here we use 2500 low-fidelity runs and 200 high-fidelity runs,
which leads to about a 90% reduction in training simulation costs. The method
is applied for two-phase subsurface flow in 3D channelized systems, with flow
driven by wells. The surrogate model trained with multifidelity data is shown
to be nearly as accurate as a reference surrogate trained with only
high-fidelity data in predicting dynamic pressure and saturation fields in new
geomodels. Importantly, the network provides results that are significantly
more accurate than the low-fidelity simulations used for most of the training.
The multifidelity surrogate is also applied for history matching using an
ensemble-based procedure, where accuracy relative to reference results is again
demonstrated.
|
[
{
"created": "Sat, 23 Apr 2022 20:09:49 GMT",
"version": "v1"
}
] |
2022-12-28
|
[
[
"Jiang",
"Su",
""
],
[
"Durlofsky",
"Louis J.",
""
]
] |
Data assimilation presents computational challenges because many high-fidelity models must be simulated. Various deep-learning-based surrogate modeling techniques have been developed to reduce the simulation costs associated with these applications. However, to construct data-driven surrogate models, several thousand high-fidelity simulation runs may be required to provide training samples, and these computations can make training prohibitively expensive. To address this issue, in this work we present a framework where most of the training simulations are performed on coarsened geomodels. These models are constructed using a flow-based upscaling method. The framework entails the use of a transfer-learning procedure, incorporated within an existing recurrent residual U-Net architecture, in which network training is accomplished in three steps. In the first step, where the bulk of the training is performed, only low-fidelity simulation results are used. The second and third steps, in which the output layer is trained and the overall network is fine-tuned, require a relatively small number of high-fidelity simulations. Here we use 2500 low-fidelity runs and 200 high-fidelity runs, which leads to about a 90% reduction in training simulation costs. The method is applied for two-phase subsurface flow in 3D channelized systems, with flow driven by wells. The surrogate model trained with multifidelity data is shown to be nearly as accurate as a reference surrogate trained with only high-fidelity data in predicting dynamic pressure and saturation fields in new geomodels. Importantly, the network provides results that are significantly more accurate than the low-fidelity simulations used for most of the training. The multifidelity surrogate is also applied for history matching using an ensemble-based procedure, where accuracy relative to reference results is again demonstrated.
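The three-step schedule — bulk training on low-fidelity data, retraining only the output layer on high-fidelity data, then fine-tuning everything — can be sketched with a toy linear surrogate. The real model is a recurrent residual U-Net; the layer names, data sizes, and learning rates here are illustrative only.

```python
import numpy as np

class Surrogate:
    """Toy linear surrogate with a 'body' (feature extractor) and an
    output 'head'; only meant to illustrate the freezing schedule."""
    def __init__(self, rng, d=4):
        self.body = rng.normal(size=(d, d)) * 0.3
        self.head = rng.normal(size=(d, 1)) * 0.3

    def predict(self, x):
        return x @ self.body @ self.head

    def fit(self, x, y, train_body, train_head, lr=0.05, steps=500):
        for _ in range(steps):
            h = x @ self.body
            err = (h @ self.head - y) / len(x)  # averaged residual
            if train_head:
                self.head -= lr * h.T @ err
            if train_body:
                self.body -= lr * x.T @ (err @ self.head.T)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 1))
def lo_fi(x): return 0.8 * x @ w  # cheap, biased coarse-model response
def hi_fi(x): return x @ w        # expensive high-fidelity response

x_lo, x_hi = rng.normal(size=(2000, 4)), rng.normal(size=(200, 4))
net = Surrogate(rng)
# Step 1: bulk of training on low-fidelity simulations only.
net.fit(x_lo, lo_fi(x_lo), train_body=True, train_head=True)
# Step 2: retrain only the output layer on the small high-fidelity set.
net.fit(x_hi, hi_fi(x_hi), train_body=False, train_head=True)
# Step 3: fine-tune the whole network on the high-fidelity set.
net.fit(x_hi, hi_fi(x_hi), train_body=True, train_head=True, lr=0.01)
```

The point of the schedule is that the expensive high-fidelity data is only needed for the cheap steps 2 and 3, while the data-hungry step 1 runs on coarse simulations.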
|
1509.06084
|
EPTCS
|
J. Strother Moore (Department of Computer Science, The University of
Texas at Austin)
|
Stateman: Using Metafunctions to Manage Large Terms Representing Machine
States
|
In Proceedings ACL2 2015, arXiv:1509.05526
|
EPTCS 192, 2015, pp. 93-109
|
10.4204/EPTCS.192.8
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When ACL2 is used to model the operational semantics of computing machines,
machine states are typically represented by terms recording the contents of the
state components. When models are realistic and are stepped through thousands
of machine cycles, these terms can grow quite large and the cost of simplifying
them on each step grows. In this paper we describe an ACL2 book that uses HIDE
and metafunctions to facilitate the management of large terms representing such
states. Because the metafunctions for each state component updater are solely
responsible for creating state expressions (i.e., "writing") and the
metafunctions for each state component accessor are solely responsible for
extracting values (i.e., "reading") from such state expressions, they can
maintain their own normal form, use HIDE to prevent other parts of ACL2 from
inspecting them, and use honsing to uniquely represent state expressions. The
last feature makes it possible to memoize the metafunctions, which can improve
proof performance in some machine models. This paper describes a
general-purpose ACL2 book modeling a byte-addressed memory supporting "mixed"
reads and writes. By "mixed" we mean that reads need not correspond (in address
or number of bytes) with writes. Verified metafunctions simplify such
"read-over-write" expressions while hiding the potentially large state
expression. A key utility is a function that determines an upper bound on the
value of a symbolic arithmetic expression, which plays a role in resolving
writes to addresses given by symbolic expressions. We also report on a
preliminary experiment with the book, which involves the production of states
containing several million function calls.
|
[
{
"created": "Mon, 21 Sep 2015 00:35:40 GMT",
"version": "v1"
}
] |
2015-09-22
|
[
[
"Moore",
"J. Strother",
"",
"Department of Computer Science, The University of\n Texas at Austin"
]
] |
When ACL2 is used to model the operational semantics of computing machines, machine states are typically represented by terms recording the contents of the state components. When models are realistic and are stepped through thousands of machine cycles, these terms can grow quite large and the cost of simplifying them on each step grows. In this paper we describe an ACL2 book that uses HIDE and metafunctions to facilitate the management of large terms representing such states. Because the metafunctions for each state component updater are solely responsible for creating state expressions (i.e., "writing") and the metafunctions for each state component accessor are solely responsible for extracting values (i.e., "reading") from such state expressions, they can maintain their own normal form, use HIDE to prevent other parts of ACL2 from inspecting them, and use honsing to uniquely represent state expressions. The last feature makes it possible to memoize the metafunctions, which can improve proof performance in some machine models. This paper describes a general-purpose ACL2 book modeling a byte-addressed memory supporting "mixed" reads and writes. By "mixed" we mean that reads need not correspond (in address or number of bytes) with writes. Verified metafunctions simplify such "read-over-write" expressions while hiding the potentially large state expression. A key utility is a function that determines an upper bound on the value of a symbolic arithmetic expression, which plays a role in resolving writes to addresses given by symbolic expressions. We also report on a preliminary experiment with the book, which involves the production of states containing several million function calls.
|
2404.02152
|
Chong Bao
|
Chong Bao, Yinda Zhang, Yuan Li, Xiyu Zhang, Bangbang Yang, Hujun Bao,
Marc Pollefeys, Guofeng Zhang, Zhaopeng Cui
|
GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from
a Single Image
|
Accepted to CVPR 2024. Project page:
https://zju3dv.github.io/geneavatar/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, we have witnessed the explosive growth of various volumetric
representations in modeling animatable head avatars. However, due to the
diversity of frameworks, there is no practical method to support high-level
applications like 3D head avatar editing across different representations. In
this paper, we propose a generic avatar editing approach that can be
universally applied to various 3DMM driving volumetric head avatars. To achieve
this goal, we design a novel expression-aware modification generative model,
which lifts 2D editing from a single image to a consistent 3D
modification field. To ensure the effectiveness of the generative modification
process, we develop several techniques, including an expression-dependent
modification distillation scheme to draw knowledge from the large-scale head
avatar model and 2D facial texture editing tools, implicit latent space
guidance to enhance model convergence, and a segmentation-based loss reweight
strategy for fine-grained texture inversion. Extensive experiments demonstrate
that our method delivers high-quality and consistent results across multiple
expressions and viewpoints. Project page: https://zju3dv.github.io/geneavatar/
|
[
{
"created": "Tue, 2 Apr 2024 17:58:35 GMT",
"version": "v1"
}
] |
2024-04-03
|
[
[
"Bao",
"Chong",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Li",
"Yuan",
""
],
[
"Zhang",
"Xiyu",
""
],
[
"Yang",
"Bangbang",
""
],
[
"Bao",
"Hujun",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Cui",
"Zhaopeng",
""
]
] |
Recently, we have witnessed the explosive growth of various volumetric representations in modeling animatable head avatars. However, due to the diversity of frameworks, there is no practical method to support high-level applications like 3D head avatar editing across different representations. In this paper, we propose a generic avatar editing approach that can be universally applied to various 3DMM driving volumetric head avatars. To achieve this goal, we design a novel expression-aware modification generative model, which lifts 2D editing from a single image to a consistent 3D modification field. To ensure the effectiveness of the generative modification process, we develop several techniques, including an expression-dependent modification distillation scheme to draw knowledge from the large-scale head avatar model and 2D facial texture editing tools, implicit latent space guidance to enhance model convergence, and a segmentation-based loss reweight strategy for fine-grained texture inversion. Extensive experiments demonstrate that our method delivers high-quality and consistent results across multiple expressions and viewpoints. Project page: https://zju3dv.github.io/geneavatar/
|
1910.02655
|
Amir Soleimani
|
Amir Soleimani, Christof Monz, Marcel Worring
|
BERT for Evidence Retrieval and Claim Verification
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the promising performance of pre-trained language models, we
investigate BERT in an evidence retrieval and claim verification pipeline for
the FEVER fact extraction and verification challenge. To this end, we propose
to use two BERT models, one for retrieving potential evidence sentences
supporting or rejecting claims, and another for verifying claims based on the
predicted evidence sets. To train the BERT retrieval system, we use pointwise
and pairwise loss functions, and examine the effect of hard negative mining. A
second BERT model is trained to classify the samples as supported, refuted, and
not enough information. Our system achieves a new state-of-the-art recall of
87.1 for retrieving top five sentences out of the FEVER documents consisting of
50K Wikipedia pages, and scores second in the official leaderboard with the
FEVER score of 69.7.
|
[
{
"created": "Mon, 7 Oct 2019 07:58:26 GMT",
"version": "v1"
}
] |
2019-10-08
|
[
[
"Soleimani",
"Amir",
""
],
[
"Monz",
"Christof",
""
],
[
"Worring",
"Marcel",
""
]
] |
Motivated by the promising performance of pre-trained language models, we investigate BERT in an evidence retrieval and claim verification pipeline for the FEVER fact extraction and verification challenge. To this end, we propose to use two BERT models, one for retrieving potential evidence sentences supporting or rejecting claims, and another for verifying claims based on the predicted evidence sets. To train the BERT retrieval system, we use pointwise and pairwise loss functions, and examine the effect of hard negative mining. A second BERT model is trained to classify the samples as supported, refuted, and not enough information. Our system achieves a new state-of-the-art recall of 87.1 for retrieving top five sentences out of the FEVER documents consisting of 50K Wikipedia pages, and scores second in the official leaderboard with the FEVER score of 69.7.
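The pairwise retrieval loss and hard negative mining can be sketched as follows. In the paper the scores come from a BERT model over (claim, sentence) pairs; here they are plain numbers, and the margin value is an illustrative assumption.

```python
import numpy as np

def pairwise_hinge_loss(pos_scores, neg_scores, margin=1.0):
    """Pairwise ranking loss: every evidence sentence should outscore
    every non-evidence sentence by at least `margin`."""
    diffs = margin - (pos_scores[:, None] - neg_scores[None, :])
    return np.maximum(diffs, 0.0).mean()

def hard_negatives(neg_scores, k):
    """Hard negative mining: keep the k highest-scoring negatives,
    i.e. the non-evidence sentences the model most confuses with evidence."""
    return np.sort(neg_scores)[-k:]

pos = np.array([2.0, 1.5])        # scores of true evidence sentences
neg = np.array([-1.0, 0.3, 1.8])  # scores of non-evidence sentences
loss_all = pairwise_hinge_loss(pos, neg)
loss_hard = pairwise_hinge_loss(pos, hard_negatives(neg, 1))
```

Training against only the hard negatives concentrates the loss on the confusable sentences (here the 1.8-scoring negative), which is the intended effect of the mining step.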
|
2203.15448
|
H\"armel Nestra
|
Dan Bogdanov (1), Joosep J\"a\"ager (1), Peeter Laud (1), H\"armel
Nestra (1), Martin Pettai (1), Jaak Randmets (1), Ville Sokk (1), Kert Tali
(1), Sandhra-Mirella Valdma (1) ((1) Cybernetica AS)
|
ZK-SecreC: a Domain-Specific Language for Zero Knowledge Proofs
|
75 pp
| null | null | null |
cs.PL cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ZK-SecreC, a domain-specific language for zero-knowledge proofs.
We present the rationale for its design, its syntax and semantics, and
demonstrate its usefulness on the basis of a number of non-trivial examples.
The design features a type system, where each piece of data is assigned both a
confidentiality and an integrity type, which are not orthogonal to each other.
We perform an empirical evaluation of the statements produced by its compiler in
terms of their size. We also show the integration of the compiler with the
implementation of a zero-knowledge proof technique, and evaluate the running
time of both Prover and Verifier.
|
[
{
"created": "Tue, 29 Mar 2022 11:35:11 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Aug 2022 13:43:41 GMT",
"version": "v2"
}
] |
2022-08-29
|
[
[
"Bogdanov",
"Dan",
"",
"Cybernetica AS"
],
[
"Jääger",
"Joosep",
"",
"Cybernetica AS"
],
[
"Laud",
"Peeter",
"",
"Cybernetica AS"
],
[
"Nestra",
"Härmel",
"",
"Cybernetica AS"
],
[
"Pettai",
"Martin",
"",
"Cybernetica AS"
],
[
"Randmets",
"Jaak",
"",
"Cybernetica AS"
],
[
"Sokk",
"Ville",
"",
"Cybernetica AS"
],
[
"Tali",
"Kert",
"",
"Cybernetica AS"
],
[
"Valdma",
"Sandhra-Mirella",
"",
"Cybernetica AS"
]
] |
We present ZK-SecreC, a domain-specific language for zero-knowledge proofs. We present the rationale for its design, its syntax and semantics, and demonstrate its usefulness on the basis of a number of non-trivial examples. The design features a type system, where each piece of data is assigned both a confidentiality and an integrity type, which are not orthogonal to each other. We perform an empirical evaluation of the statements produced by its compiler in terms of their size. We also show the integration of the compiler with the implementation of a zero-knowledge proof technique, and evaluate the running time of both Prover and Verifier.
|
2107.14297
|
Enrico Ubaldi
|
Enrico Ubaldi, Takahiro Yabe, Nicholas K. W. Jones, Maham Faisal Khan,
Satish V. Ukkusuri, Riccardo Di Clemente, Emanuele Strano
|
Mobilkit: A Python Toolkit for Urban Resilience and Disaster Risk
Management Analytics using High Frequency Human Mobility Data
|
3 pages, 1 figure, KDD KDD Workshop on Data-driven Humanitarian
Mapping, 27th ACM SIGKDD Conference
|
Journal of Open Source Software, 9(95), 5201, 2024
|
10.21105/joss.05201
| null |
cs.CY cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Increasingly available high-frequency location datasets derived from
smartphones provide unprecedented insight into trajectories of human mobility.
These datasets can play a significant and growing role in informing
preparedness and response to natural disasters. However, limited tools exist to
enable rapid analytics using mobility data, and tend not to be tailored
specifically for disaster risk management. We present an open-source,
Python-based toolkit designed to conduct replicable and scalable post-disaster
analytics using GPS location data. Privacy, system capabilities, and potential
expansions of \textit{Mobilkit} are discussed.
|
[
{
"created": "Thu, 29 Jul 2021 19:49:54 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Sep 2021 08:54:13 GMT",
"version": "v2"
}
] |
2024-03-05
|
[
[
"Ubaldi",
"Enrico",
""
],
[
"Yabe",
"Takahiro",
""
],
[
"Jones",
"Nicholas K. W.",
""
],
[
"Khan",
"Maham Faisal",
""
],
[
"Ukkusuri",
"Satish V.",
""
],
[
"Di Clemente",
"Riccardo",
""
],
[
"Strano",
"Emanuele",
""
]
] |
Increasingly available high-frequency location datasets derived from smartphones provide unprecedented insight into trajectories of human mobility. These datasets can play a significant and growing role in informing preparedness and response to natural disasters. However, limited tools exist to enable rapid analytics using mobility data, and tend not to be tailored specifically for disaster risk management. We present an open-source, Python-based toolkit designed to conduct replicable and scalable post-disaster analytics using GPS location data. Privacy, system capabilities, and potential expansions of \textit{Mobilkit} are discussed.
|
2404.16223
|
Marcos V. Conde
|
Marcos V. Conde and Florin-Alexandru Vasluianu and Radu Timofte and
Jianxing Zhang and Jia Li and Fan Wang and Xiaopeng Li and Zikun Liu and
Hyunhee Park and Sejun Song and Changho Kim and Zhijuan Huang and Hongyuan Yu
and Cheng Wan and Wending Xiang and Jiamin Lin and Hang Zhong and Qiaosong
Zhang and Yue Sun and Xuanwu Yin and Kunlong Zuo and Senyan Xu and Siyuan
Jiang and Zhijing Sun and Jiaying Zhu and Liangyan Li and Ke Chen and Yunzhe
Li and Yimo Ning and Guanhua Zhao and Jun Chen and Jinyang Yu and Kele Xu and
Qisheng Xu and Yong Dou
|
Deep RAW Image Super-Resolution. A NTIRE 2024 Challenge Survey
|
CVPR 2024 - NTIRE Workshop
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper reviews the NTIRE 2024 RAW Image Super-Resolution Challenge,
highlighting the proposed solutions and results. New methods for RAW
Super-Resolution could be essential in modern Image Signal Processing (ISP)
pipelines; however, this problem is not as explored as in the RGB domain. The
goal of this challenge is to upscale RAW Bayer images by 2x, considering
unknown degradations such as noise and blur. In the challenge, a total of 230
participants registered, and 45 submitted results during the challenge period.
The performance of the top-5 submissions is reviewed and provided here as a
gauge for the current state-of-the-art in RAW Image Super-Resolution.
|
[
{
"created": "Wed, 24 Apr 2024 21:51:01 GMT",
"version": "v1"
}
] |
2024-04-26
|
[
[
"Conde",
"Marcos V.",
""
],
[
"Vasluianu",
"Florin-Alexandru",
""
],
[
"Timofte",
"Radu",
""
],
[
"Zhang",
"Jianxing",
""
],
[
"Li",
"Jia",
""
],
[
"Wang",
"Fan",
""
],
[
"Li",
"Xiaopeng",
""
],
[
"Liu",
"Zikun",
""
],
[
"Park",
"Hyunhee",
""
],
[
"Song",
"Sejun",
""
],
[
"Kim",
"Changho",
""
],
[
"Huang",
"Zhijuan",
""
],
[
"Yu",
"Hongyuan",
""
],
[
"Wan",
"Cheng",
""
],
[
"Xiang",
"Wending",
""
],
[
"Lin",
"Jiamin",
""
],
[
"Zhong",
"Hang",
""
],
[
"Zhang",
"Qiaosong",
""
],
[
"Sun",
"Yue",
""
],
[
"Yin",
"Xuanwu",
""
],
[
"Zuo",
"Kunlong",
""
],
[
"Xu",
"Senyan",
""
],
[
"Jiang",
"Siyuan",
""
],
[
"Sun",
"Zhijing",
""
],
[
"Zhu",
"Jiaying",
""
],
[
"Li",
"Liangyan",
""
],
[
"Chen",
"Ke",
""
],
[
"Li",
"Yunzhe",
""
],
[
"Ning",
"Yimo",
""
],
[
"Zhao",
"Guanhua",
""
],
[
"Chen",
"Jun",
""
],
[
"Yu",
"Jinyang",
""
],
[
"Xu",
"Kele",
""
],
[
"Xu",
"Qisheng",
""
],
[
"Dou",
"Yong",
""
]
] |
This paper reviews the NTIRE 2024 RAW Image Super-Resolution Challenge, highlighting the proposed solutions and results. New methods for RAW Super-Resolution could be essential in modern Image Signal Processing (ISP) pipelines; however, this problem is not as explored as in the RGB domain. The goal of this challenge is to upscale RAW Bayer images by 2x, considering unknown degradations such as noise and blur. In the challenge, a total of 230 participants registered, and 45 submitted results during the challenge period. The performance of the top-5 submissions is reviewed and provided here as a gauge for the current state-of-the-art in RAW Image Super-Resolution.
|
2011.11305
|
Ioannis Apostolopoulos
|
Ioannis D. Apostolopoulos, Mpesiana Tzani
|
Industrial object, machine part and defect recognition towards fully
automated industrial monitoring employing deep learning. The case of
multilevel VGG19
|
17 pages, 10 figures
|
Journal of Ambient Intelligence and Humanized Computing, 2022
|
10.1007/s12652-021-03688-7
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern industry requires modern solutions for monitoring the automatic
production of goods. Smart monitoring of the functionality of the mechanical
parts of technology systems or machines is mandatory for a fully automatic
production process. Although Deep Learning has been advancing, allowing for
real-time object detection and other tasks, little has been investigated about
the effectiveness of specially designed Convolutional Neural Networks for
defect detection and industrial object recognition. In the particular study, we
employed six publicly available industrial-related datasets containing defect
materials and industrial tools or engine parts, aiming to develop a specialized
model for pattern recognition. Motivated by the recent success of the Virtual
Geometry Group (VGG) network, we propose a modified version of it, called
Multipath VGG19, which allows for more local and global feature extraction,
while the extra features are fused via concatenation. The experiments verified
the effectiveness of MVGG19 over the traditional VGG19. Specifically, top
classification performance was achieved in five of the six image datasets,
while the average classification improvement was 6.95%.
|
[
{
"created": "Mon, 23 Nov 2020 10:05:50 GMT",
"version": "v1"
}
] |
2022-01-11
|
[
[
"Apostolopoulos",
"Ioannis D.",
""
],
[
"Tzani",
"Mpesiana",
""
]
] |
Modern industry requires modern solutions for monitoring the automatic production of goods. Smart monitoring of the functionality of the mechanical parts of technology systems or machines is mandatory for a fully automatic production process. Although Deep Learning has been advancing, allowing for real-time object detection and other tasks, little has been investigated about the effectiveness of specially designed Convolutional Neural Networks for defect detection and industrial object recognition. In the particular study, we employed six publicly available industrial-related datasets containing defect materials and industrial tools or engine parts, aiming to develop a specialized model for pattern recognition. Motivated by the recent success of the Virtual Geometry Group (VGG) network, we propose a modified version of it, called Multipath VGG19, which allows for more local and global feature extraction, while the extra features are fused via concatenation. The experiments verified the effectiveness of MVGG19 over the traditional VGG19. Specifically, top classification performance was achieved in five of the six image datasets, while the average classification improvement was 6.95%.
|
2302.11985
|
Shin Hwei Tan
|
Hsu Myat Win, Haibo Wang, Shin Hwei Tan
|
Automatic Detecting Unethical Behavior in Open-source Software Projects
|
11 pages
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Given the rapid growth of Open-Source Software (OSS) projects, ethical
considerations are becoming more important. Past studies focused on specific
ethical issues (e.g., gender bias and fairness in OSS). There is little to no
study on the different types of unethical behavior in OSS projects. We present
the first study of unethical behavior in OSS projects from the stakeholders'
perspective. Our study of 316 GitHub issues provides a taxonomy of 15 types of
unethical behavior guided by six ethical principles (e.g., autonomy). Examples
of new unethical behavior include soft forking (copying a repository without
forking) and self-promotion (promoting a repository without self-identifying as
contributor to the repository). We also identify 18 types of software artifacts
affected by the unethical behavior. The diverse types of unethical behavior
identified in our study (1) call for the attention of developers and researchers
when making contributions in GitHub, and (2) point to future research on
automated detection of unethical behavior in OSS projects. Based on our study,
we propose Etor, an approach that can automatically detect six types of
unethical behavior by using ontological engineering and Semantic Web Rule
Language (SWRL) rules to model GitHub attributes and software artifacts. Our
evaluation on 195,621 GitHub issues (1,765 GitHub repositories) shows that Etor
can automatically detect 548 instances of unethical behavior with a 74.8% average true
positive rate. This shows the feasibility of automated detection of unethical
behavior in OSS projects.
|
[
{
"created": "Thu, 23 Feb 2023 13:05:25 GMT",
"version": "v1"
}
] |
2023-02-24
|
[
[
"Win",
"Hsu Myat",
""
],
[
"Wang",
"Haibo",
""
],
[
"Tan",
"Shin Hwei",
""
]
] |
Given the rapid growth of Open-Source Software (OSS) projects, ethical considerations are becoming more important. Past studies focused on specific ethical issues (e.g., gender bias and fairness in OSS). There is little to no study on the different types of unethical behavior in OSS projects. We present the first study of unethical behavior in OSS projects from the stakeholders' perspective. Our study of 316 GitHub issues provides a taxonomy of 15 types of unethical behavior guided by six ethical principles (e.g., autonomy). Examples of new unethical behavior include soft forking (copying a repository without forking) and self-promotion (promoting a repository without self-identifying as contributor to the repository). We also identify 18 types of software artifacts affected by the unethical behavior. The diverse types of unethical behavior identified in our study (1) call for the attention of developers and researchers when making contributions in GitHub, and (2) point to future research on automated detection of unethical behavior in OSS projects. Based on our study, we propose Etor, an approach that can automatically detect six types of unethical behavior by using ontological engineering and Semantic Web Rule Language (SWRL) rules to model GitHub attributes and software artifacts. Our evaluation on 195,621 GitHub issues (1,765 GitHub repositories) shows that Etor can automatically detect 548 instances of unethical behavior with a 74.8% average true positive rate. This shows the feasibility of automated detection of unethical behavior in OSS projects.
|
2401.08903
|
Fengfan Zhou
|
Fengfan Zhou, Qianyu Zhou, Bangjie Yin, Hui Zheng, Xuequan Lu,
Lizhuang Ma, Hefei Ling
|
Rethinking Impersonation and Dodging Attacks on Face Recognition Systems
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face Recognition (FR) systems can be easily deceived by adversarial examples
that manipulate benign face images through imperceptible perturbations.
Adversarial attacks on FR encompass two types: impersonation (targeted) attacks
and dodging (untargeted) attacks. Previous methods often achieve a successful
impersonation attack on FR; however, it does not necessarily guarantee a
successful dodging attack on FR in the black-box setting. In this paper, our
key insight is that the generation of adversarial examples should perform both
impersonation and dodging attacks simultaneously. To this end, we propose a
novel attack method termed as Adversarial Pruning (Adv-Pruning), to fine-tune
existing adversarial examples to enhance their dodging capabilities while
preserving their impersonation capabilities. Adv-Pruning consists of Priming,
Pruning, and Restoration stages. Concretely, we propose Adversarial Priority
Quantification to measure the region-wise priority of original adversarial
perturbations, identifying and releasing those with minimal impact on absolute
model output variances. Then, Biased Gradient Adaptation is presented to adapt
the adversarial examples to traverse the decision boundaries of both the
attacker and victim by adding perturbations favoring dodging attacks on the
vacated regions, preserving the prioritized features of the original
perturbations while boosting dodging performance. As a result, we can maintain
the impersonation capabilities of original adversarial examples while
effectively enhancing dodging capabilities. Comprehensive experiments
demonstrate the superiority of our method compared with state-of-the-art
adversarial attacks.
|
[
{
"created": "Wed, 17 Jan 2024 01:10:17 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2024 02:55:23 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Apr 2024 08:31:00 GMT",
"version": "v3"
}
] |
2024-04-26
|
[
[
"Zhou",
"Fengfan",
""
],
[
"Zhou",
"Qianyu",
""
],
[
"Yin",
"Bangjie",
""
],
[
"Zheng",
"Hui",
""
],
[
"Lu",
"Xuequan",
""
],
[
"Ma",
"Lizhuang",
""
],
[
"Ling",
"Hefei",
""
]
] |
Face Recognition (FR) systems can be easily deceived by adversarial examples that manipulate benign face images through imperceptible perturbations. Adversarial attacks on FR encompass two types: impersonation (targeted) attacks and dodging (untargeted) attacks. Previous methods often achieve a successful impersonation attack on FR; however, it does not necessarily guarantee a successful dodging attack on FR in the black-box setting. In this paper, our key insight is that the generation of adversarial examples should perform both impersonation and dodging attacks simultaneously. To this end, we propose a novel attack method termed as Adversarial Pruning (Adv-Pruning), to fine-tune existing adversarial examples to enhance their dodging capabilities while preserving their impersonation capabilities. Adv-Pruning consists of Priming, Pruning, and Restoration stages. Concretely, we propose Adversarial Priority Quantification to measure the region-wise priority of original adversarial perturbations, identifying and releasing those with minimal impact on absolute model output variances. Then, Biased Gradient Adaptation is presented to adapt the adversarial examples to traverse the decision boundaries of both the attacker and victim by adding perturbations favoring dodging attacks on the vacated regions, preserving the prioritized features of the original perturbations while boosting dodging performance. As a result, we can maintain the impersonation capabilities of original adversarial examples while effectively enhancing dodging capabilities. Comprehensive experiments demonstrate the superiority of our method compared with state-of-the-art adversarial attacks.
|
2403.10293
|
Verena Blaschke
|
Verena Blaschke, Barbara Kova\v{c}i\'c, Siyao Peng, Hinrich Sch\"utze,
Barbara Plank
|
MaiBaam: A Multi-Dialectal Bavarian Universal Dependency Treebank
|
LREC-COLING 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite the success of the Universal Dependencies (UD) project exemplified by
its impressive language breadth, there is still a lack in `within-language
breadth': most treebanks focus on standard languages. Even for German, the
language with the most annotations in UD, so far no treebank exists for one of
its language varieties spoken by over 10M people: Bavarian. To contribute to
closing this gap, we present the first multi-dialect Bavarian treebank
(MaiBaam) manually annotated with part-of-speech and syntactic dependency
information in UD, covering multiple text genres (wiki, fiction, grammar
examples, social, non-fiction). We highlight the morphosyntactic differences
between the closely-related Bavarian and German and showcase the rich
variability of speakers' orthographies. Our corpus includes 15k tokens,
covering dialects from all Bavarian-speaking areas spanning three countries. We
provide baseline parsing and POS tagging results, which are lower than results
obtained on German and vary substantially between different graph-based
parsers. To support further research on Bavarian syntax, we make our dataset,
language-specific guidelines and code publicly available.
|
[
{
"created": "Fri, 15 Mar 2024 13:33:10 GMT",
"version": "v1"
}
] |
2024-03-18
|
[
[
"Blaschke",
"Verena",
""
],
[
"Kovačić",
"Barbara",
""
],
[
"Peng",
"Siyao",
""
],
[
"Schütze",
"Hinrich",
""
],
[
"Plank",
"Barbara",
""
]
] |
Despite the success of the Universal Dependencies (UD) project exemplified by its impressive language breadth, there is still a lack in `within-language breadth': most treebanks focus on standard languages. Even for German, the language with the most annotations in UD, so far no treebank exists for one of its language varieties spoken by over 10M people: Bavarian. To contribute to closing this gap, we present the first multi-dialect Bavarian treebank (MaiBaam) manually annotated with part-of-speech and syntactic dependency information in UD, covering multiple text genres (wiki, fiction, grammar examples, social, non-fiction). We highlight the morphosyntactic differences between the closely-related Bavarian and German and showcase the rich variability of speakers' orthographies. Our corpus includes 15k tokens, covering dialects from all Bavarian-speaking areas spanning three countries. We provide baseline parsing and POS tagging results, which are lower than results obtained on German and vary substantially between different graph-based parsers. To support further research on Bavarian syntax, we make our dataset, language-specific guidelines and code publicly available.
|
1808.10363
|
Mat\'u\v{s} Sul\'ir
|
Mat\'u\v{s} Sul\'ir, Jaroslav Porub\"an, Ondrej Zori\v{c}\'ak
|
IDE-Independent Program Comprehension Tools via Source File Overwriting
| null |
2017 IEEE 14th International Scientific Conference on Informatics,
IEEE, 2017, pp. 372-376
|
10.1109/INFORMATICS.2017.8327277
| null |
cs.SE cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditionally, we have two possibilities to design tools for program
comprehension and analysis. The first option is to create a standalone program,
independent of any source code editor. This way, the act of source code editing
is separated from the act of viewing the code analysis results. The second
option is to create a plugin for a specific IDE (integrated development
environment) - in this case, a separate version must be created for each IDE.
We propose an approach where information about source code elements is written
directly into source files as annotations or special comments. Before
committing to a version control system, the annotations are removed from the
source code to avoid code pollution. We briefly evaluate the approach and
delineate its limitations.
|
[
{
"created": "Thu, 30 Aug 2018 15:45:52 GMT",
"version": "v1"
}
] |
2018-08-31
|
[
[
"Sulír",
"Matúš",
""
],
[
"Porubän",
"Jaroslav",
""
],
[
"Zoričák",
"Ondrej",
""
]
] |
Traditionally, we have two possibilities to design tools for program comprehension and analysis. The first option is to create a standalone program, independent of any source code editor. This way, the act of source code editing is separated from the act of viewing the code analysis results. The second option is to create a plugin for a specific IDE (integrated development environment) - in this case, a separate version must be created for each IDE. We propose an approach where information about source code elements is written directly into source files as annotations or special comments. Before committing to a version control system, the annotations are removed from the source code to avoid code pollution. We briefly evaluate the approach and delineate its limitations.
|
2406.15762
|
Zhichao Chen
|
Zhichao Chen, Haoxuan Li, Fangyikang Wang, Odin Zhang, Hu Xu, Xiaoyu
Jiang, Zhihuan Song, Eric H. Wang
|
Rethinking the Diffusion Models for Numerical Tabular Data Imputation
from the Perspective of Wasserstein Gradient Flow
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion models (DMs) have gained attention in Missing Data Imputation
(MDI), but there remain two long-neglected issues to be addressed: (1).
Inaccurate Imputation, which arises from inherently
sample-diversification-pursuing generative process of DMs. (2). Difficult
Training, which stems from intricate design required for the mask matrix in
model training stage. To address these concerns within the realm of numerical
tabular datasets, we introduce a novel principled approach termed Kernelized
Negative Entropy-regularized Wasserstein gradient flow Imputation (KnewImp).
Specifically, based on the Wasserstein gradient flow (WGF) framework, we first
prove that issue (1) stems from the fact that the cost functionals implicitly
maximized in DM-based MDI are equivalent to the MDI's objective plus
diversification-promoting non-negative terms. Based on this, we then design a
novel cost functional with diversification-discouraging negative entropy and
derive our KnewImp approach within WGF framework and reproducing kernel Hilbert
space. After that, we prove that the imputation procedure of KnewImp can be
derived from another cost functional related to the joint distribution,
eliminating the need for the mask matrix and hence naturally addressing issue
(2). Extensive experiments demonstrate that our proposed KnewImp approach
significantly outperforms existing state-of-the-art methods.
|
[
{
"created": "Sat, 22 Jun 2024 06:59:32 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Chen",
"Zhichao",
""
],
[
"Li",
"Haoxuan",
""
],
[
"Wang",
"Fangyikang",
""
],
[
"Zhang",
"Odin",
""
],
[
"Xu",
"Hu",
""
],
[
"Jiang",
"Xiaoyu",
""
],
[
"Song",
"Zhihuan",
""
],
[
"Wang",
"Eric H.",
""
]
] |
Diffusion models (DMs) have gained attention in Missing Data Imputation (MDI), but there remain two long-neglected issues to be addressed: (1). Inaccurate Imputation, which arises from inherently sample-diversification-pursuing generative process of DMs. (2). Difficult Training, which stems from intricate design required for the mask matrix in model training stage. To address these concerns within the realm of numerical tabular datasets, we introduce a novel principled approach termed Kernelized Negative Entropy-regularized Wasserstein gradient flow Imputation (KnewImp). Specifically, based on the Wasserstein gradient flow (WGF) framework, we first prove that issue (1) stems from the fact that the cost functionals implicitly maximized in DM-based MDI are equivalent to the MDI's objective plus diversification-promoting non-negative terms. Based on this, we then design a novel cost functional with diversification-discouraging negative entropy and derive our KnewImp approach within WGF framework and reproducing kernel Hilbert space. After that, we prove that the imputation procedure of KnewImp can be derived from another cost functional related to the joint distribution, eliminating the need for the mask matrix and hence naturally addressing issue (2). Extensive experiments demonstrate that our proposed KnewImp approach significantly outperforms existing state-of-the-art methods.
|
2309.04878
|
Ekzhin Ear
|
Ekzhin Ear, Jose L. C. Remy, Antonia Feffer, Shouhuai Xu
|
Characterizing Cyber Attacks against Space Systems with Missing Data:
Framework and Case Study
|
Accepted for publication: IEEE International Conference on
Communications and Network Security 2023 (IEEE CNS)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cybersecurity of space systems is an emerging topic, but there is no single
dataset that documents cyber attacks against space systems that have occurred
in the past. These incidents are often scattered in media reports while missing
many details, which we dub the missing-data problem. Nevertheless, even
"low-quality" datasets containing such reports would be extremely valuable
because of the dearth of space cybersecurity data and the sensitivity of space
systems which are often restricted from disclosure by governments. This prompts
a research question: How can we characterize real-world cyber attacks against
space systems? In this paper, we address the problem by proposing a framework,
including metrics, while also addressing the missing-data problem, by
"extrapolating" the missing data in a principled fashion. To show the
usefulness of the framework, we extract data for 72 cyber attacks against space
systems and show how to extrapolate this "low-quality" dataset to derive 4,076
attack technique kill chains. Our findings include: cyber attacks against space
systems are getting increasingly sophisticated; and, successful protection
against on-path and social engineering attacks could have prevented 80% of the
attacks.
|
[
{
"created": "Sat, 9 Sep 2023 21:40:00 GMT",
"version": "v1"
}
] |
2023-09-12
|
[
[
"Ear",
"Ekzhin",
""
],
[
"Remy",
"Jose L. C.",
""
],
[
"Feffer",
"Antonia",
""
],
[
"Xu",
"Shouhuai",
""
]
] |
Cybersecurity of space systems is an emerging topic, but there is no single dataset that documents cyber attacks against space systems that have occurred in the past. These incidents are often scattered in media reports while missing many details, which we dub the missing-data problem. Nevertheless, even "low-quality" datasets containing such reports would be extremely valuable because of the dearth of space cybersecurity data and the sensitivity of space systems which are often restricted from disclosure by governments. This prompts a research question: How can we characterize real-world cyber attacks against space systems? In this paper, we address the problem by proposing a framework, including metrics, while also addressing the missing-data problem, by "extrapolating" the missing data in a principled fashion. To show the usefulness of the framework, we extract data for 72 cyber attacks against space systems and show how to extrapolate this "low-quality" dataset to derive 4,076 attack technique kill chains. Our findings include: cyber attacks against space systems are getting increasingly sophisticated; and, successful protection against on-path and social engineering attacks could have prevented 80% of the attacks.
|
2305.09204
|
Yifan Jiang
|
Yifan Jiang, Shane Steinert-Threlkeld
|
The Weighted M\"obius Score: A Unified Framework for Feature Attribution
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature attribution aims to explain the reasoning behind a black-box model's
prediction by identifying the impact of each feature on the prediction. Recent
work has extended feature attribution to interactions between multiple
features. However, the lack of a unified framework has led to a proliferation
of methods that are often not directly comparable. This paper introduces a
parameterized attribution framework -- the Weighted M\"obius Score -- and (i)
shows that many different attribution methods for both individual features and
feature interactions are special cases and (ii) identifies some new methods. By
studying the vector space of attribution methods, our framework utilizes
standard linear algebra tools and provides interpretations in various fields,
including cooperative game theory and causal mediation analysis. We empirically
demonstrate the framework's versatility and effectiveness by applying these
attribution methods to feature interactions in sentiment analysis and
chain-of-thought prompting.
|
[
{
"created": "Tue, 16 May 2023 06:27:27 GMT",
"version": "v1"
}
] |
2023-05-17
|
[
[
"Jiang",
"Yifan",
""
],
[
"Steinert-Threlkeld",
"Shane",
""
]
] |
Feature attribution aims to explain the reasoning behind a black-box model's prediction by identifying the impact of each feature on the prediction. Recent work has extended feature attribution to interactions between multiple features. However, the lack of a unified framework has led to a proliferation of methods that are often not directly comparable. This paper introduces a parameterized attribution framework -- the Weighted M\"obius Score -- and (i) shows that many different attribution methods for both individual features and feature interactions are special cases and (ii) identifies some new methods. By studying the vector space of attribution methods, our framework utilizes standard linear algebra tools and provides interpretations in various fields, including cooperative game theory and causal mediation analysis. We empirically demonstrate the framework's versatility and effectiveness by applying these attribution methods to feature interactions in sentiment analysis and chain-of-thought prompting.
|
2005.10848
|
Surin Ahn
|
Surin Ahn, Ayfer Ozgur and Mert Pilanci
|
Global Multiclass Classification and Dataset Construction via
Heterogeneous Local Experts
|
27 pages, 8 figures, to be published in IEEE Journal on Selected
Areas in Information Theory (JSAIT) - Special Issue on Estimation and
Inference
| null |
10.1109/JSAIT.2020.3041804
| null |
cs.LG cs.IT math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the domains of dataset construction and crowdsourcing, a notable challenge
is to aggregate labels from a heterogeneous set of labelers, each of whom is
potentially an expert in some subset of tasks (and less reliable in others). To
reduce costs of hiring human labelers or training automated labeling systems,
it is of interest to minimize the number of labelers while ensuring the
reliability of the resulting dataset. We model this as the problem of
performing $K$-class classification using the predictions of smaller
classifiers, each trained on a subset of $[K]$, and derive bounds on the number
of classifiers needed to accurately infer the true class of an unlabeled sample
under both adversarial and stochastic assumptions. By exploiting a connection
to the classical set cover problem, we produce a near-optimal scheme for
designing such configurations of classifiers which recovers the well known
one-vs.-one classification approach as a special case. Experiments with the
MNIST and CIFAR-10 datasets demonstrate the favorable accuracy (compared to a
centralized classifier) of our aggregation scheme applied to classifiers
trained on subsets of the data. These results suggest a new way to
automatically label data or adapt an existing set of local classifiers to
larger-scale multiclass problems.
|
[
{
"created": "Thu, 21 May 2020 18:07:42 GMT",
"version": "v1"
},
{
"created": "Mon, 25 May 2020 04:34:43 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Jan 2021 23:34:36 GMT",
"version": "v3"
}
] |
2021-01-07
|
[
[
"Ahn",
"Surin",
""
],
[
"Ozgur",
"Ayfer",
""
],
[
"Pilanci",
"Mert",
""
]
] |
In the domains of dataset construction and crowdsourcing, a notable challenge is to aggregate labels from a heterogeneous set of labelers, each of whom is potentially an expert in some subset of tasks (and less reliable in others). To reduce costs of hiring human labelers or training automated labeling systems, it is of interest to minimize the number of labelers while ensuring the reliability of the resulting dataset. We model this as the problem of performing $K$-class classification using the predictions of smaller classifiers, each trained on a subset of $[K]$, and derive bounds on the number of classifiers needed to accurately infer the true class of an unlabeled sample under both adversarial and stochastic assumptions. By exploiting a connection to the classical set cover problem, we produce a near-optimal scheme for designing such configurations of classifiers which recovers the well known one-vs.-one classification approach as a special case. Experiments with the MNIST and CIFAR-10 datasets demonstrate the favorable accuracy (compared to a centralized classifier) of our aggregation scheme applied to classifiers trained on subsets of the data. These results suggest a new way to automatically label data or adapt an existing set of local classifiers to larger-scale multiclass problems.
|
2107.12407
|
Shannon Veitch
|
Thomas Humphries, Rasoul Akhavan Mahdavi, Shannon Veitch, Florian
Kerschbaum
|
Selective MPC: Distributed Computation of Differentially Private
Key-Value Statistics
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Key-value data is a naturally occurring data type that has not been
thoroughly investigated in the local trust model. Existing local differentially
private (LDP) solutions for computing statistics over key-value data suffer
from the inherent accuracy limitations of each user adding their own noise.
Multi-party computation (MPC) maintains better accuracy than LDP and similarly
does not require a trusted central party. However, naively applying MPC to
key-value data results in prohibitively expensive computation costs. In this
work, we present selective multi-party computation, a novel approach to
distributed computation that leverages DP leakage to efficiently and accurately
compute statistics over key-value data. By providing each party with a view of
a random subset of the data, we can capture subtractive noise. We prove that
our protocol satisfies pure DP and is provably secure in the combined DP/MPC
model. Our empirical evaluation demonstrates that we can compute statistics
over 10,000 keys in 20 seconds and can scale up to 30 servers while obtaining
results for a single key in under a second.
|
[
{
"created": "Mon, 26 Jul 2021 18:01:19 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Aug 2022 15:18:44 GMT",
"version": "v2"
}
] |
2022-08-31
|
[
[
"Humphries",
"Thomas",
""
],
[
"Mahdavi",
"Rasoul Akhavan",
""
],
[
"Veitch",
"Shannon",
""
],
[
"Kerschbaum",
"Florian",
""
]
] |
Key-value data is a naturally occurring data type that has not been thoroughly investigated in the local trust model. Existing local differentially private (LDP) solutions for computing statistics over key-value data suffer from the inherent accuracy limitations of each user adding their own noise. Multi-party computation (MPC) maintains better accuracy than LDP and similarly does not require a trusted central party. However, naively applying MPC to key-value data results in prohibitively expensive computation costs. In this work, we present selective multi-party computation, a novel approach to distributed computation that leverages DP leakage to efficiently and accurately compute statistics over key-value data. By providing each party with a view of a random subset of the data, we can capture subtractive noise. We prove that our protocol satisfies pure DP and is provably secure in the combined DP/MPC model. Our empirical evaluation demonstrates that we can compute statistics over 10,000 keys in 20 seconds and can scale up to 30 servers while obtaining results for a single key in under a second.
|
1209.3353
|
Shipra Agrawal
|
Shipra Agrawal, Navin Goyal
|
Further Optimal Regret Bounds for Thompson Sampling
|
arXiv admin note: substantial text overlap with arXiv:1111.1797
| null | null | null |
cs.LG cs.DS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thompson Sampling is one of the oldest heuristics for multi-armed bandit
problems. It is a randomized algorithm based on Bayesian ideas, and has
recently generated significant interest after several studies demonstrated it
to have better empirical performance than state-of-the-art methods.
In this paper, we provide a novel regret analysis for Thompson Sampling that
simultaneously proves both the optimal problem-dependent bound of
$(1+\epsilon)\sum_i \frac{\ln T}{\Delta_i}+O(\frac{N}{\epsilon^2})$ and the
first near-optimal problem-independent bound of $O(\sqrt{NT\ln T})$ on the
expected regret of this algorithm. Our near-optimal problem-independent bound
solves a COLT 2012 open problem of Chapelle and Li. The optimal
problem-dependent regret bound for this problem was first proven recently by
Kaufmann et al. [ALT 2012]. Our novel martingale-based analysis techniques are
conceptually simple, easily extend to distributions other than the Beta
distribution, and also extend to the more general contextual bandits setting
[Manuscript, Agrawal and Goyal, 2012].
|
[
{
"created": "Sat, 15 Sep 2012 03:41:18 GMT",
"version": "v1"
}
] |
2012-09-18
|
[
[
"Agrawal",
"Shipra",
""
],
[
"Goyal",
"Navin",
""
]
] |
Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have better empirical performance than state-of-the-art methods. In this paper, we provide a novel regret analysis for Thompson Sampling that simultaneously proves both the optimal problem-dependent bound of $(1+\epsilon)\sum_i \frac{\ln T}{\Delta_i}+O(\frac{N}{\epsilon^2})$ and the first near-optimal problem-independent bound of $O(\sqrt{NT\ln T})$ on the expected regret of this algorithm. Our near-optimal problem-independent bound solves a COLT 2012 open problem of Chapelle and Li. The optimal problem-dependent regret bound for this problem was first proven recently by Kaufmann et al. [ALT 2012]. Our novel martingale-based analysis techniques are conceptually simple, easily extend to distributions other than the Beta distribution, and also extend to the more general contextual bandits setting [Manuscript, Agrawal and Goyal, 2012].
|
2406.11159
|
Siyuan Yu
|
Siyuan Yu, Wei Chen, H. Vincent Poor
|
Distributed Stochastic Gradient Descent with Staleness: A Stochastic
Delay Differential Equation Based Framework
|
13 pages, 9 figures
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed stochastic gradient descent (SGD) has attracted considerable
recent attention due to its potential for scaling computational resources,
reducing training time, and helping protect user privacy in machine learning.
However, stragglers and limited bandwidth may induce random
computational/communication delays, thereby severely hindering the learning
process. Therefore, how to accelerate asynchronous SGD by efficiently
scheduling multiple workers is an important issue. In this paper, a unified
framework is presented to analyze and optimize the convergence of asynchronous
SGD based on stochastic delay differential equations (SDDEs) and the Poisson
approximation of aggregated gradient arrivals. In particular, we present the
run time and staleness of distributed SGD without a memorylessness assumption
on the computation times. Given the learning rate, we reveal the relevant
SDDE's damping coefficient and its delay statistics, as functions of the number
of activated clients, staleness threshold, the eigenvalues of the Hessian
matrix of the objective function, and the overall computational/communication
delay. The formulated SDDE allows us to present both the distributed SGD's
convergence condition and speed by calculating its characteristic roots,
thereby optimizing the scheduling policies for asynchronous/event-triggered
SGD. It is interestingly shown that increasing the number of activated workers
does not necessarily accelerate distributed SGD due to staleness. Moreover, a
small degree of staleness does not necessarily slow down the convergence, while
a large degree of staleness will result in the divergence of distributed SGD.
Numerical results demonstrate the potential of our SDDE framework, even in
complex learning tasks with non-convex objective functions.
|
[
{
"created": "Mon, 17 Jun 2024 02:56:55 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Yu",
"Siyuan",
""
],
[
"Chen",
"Wei",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
Distributed stochastic gradient descent (SGD) has attracted considerable recent attention due to its potential for scaling computational resources, reducing training time, and helping protect user privacy in machine learning. However, stragglers and limited bandwidth may induce random computational/communication delays, thereby severely hindering the learning process. Therefore, how to accelerate asynchronous SGD by efficiently scheduling multiple workers is an important issue. In this paper, a unified framework is presented to analyze and optimize the convergence of asynchronous SGD based on stochastic delay differential equations (SDDEs) and the Poisson approximation of aggregated gradient arrivals. In particular, we present the run time and staleness of distributed SGD without a memorylessness assumption on the computation times. Given the learning rate, we reveal the relevant SDDE's damping coefficient and its delay statistics, as functions of the number of activated clients, staleness threshold, the eigenvalues of the Hessian matrix of the objective function, and the overall computational/communication delay. The formulated SDDE allows us to present both the distributed SGD's convergence condition and speed by calculating its characteristic roots, thereby optimizing the scheduling policies for asynchronous/event-triggered SGD. It is interestingly shown that increasing the number of activated workers does not necessarily accelerate distributed SGD due to staleness. Moreover, a small degree of staleness does not necessarily slow down the convergence, while a large degree of staleness will result in the divergence of distributed SGD. Numerical results demonstrate the potential of our SDDE framework, even in complex learning tasks with non-convex objective functions.
|
1908.05293
|
Rahul Mitra
|
Rahul Mitra, Nitesh B. Gundavarapu, Abhishek Sharma, Arjun Jain
|
Multiview-Consistent Semi-Supervised Learning for 3D Human Pose
Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The best performing methods for 3D human pose estimation from monocular
images require large amounts of in-the-wild 2D and controlled 3D pose annotated
datasets which are costly and require sophisticated systems to acquire. To
reduce this annotation dependency, we propose the Multiview-Consistent
Semi-Supervised Learning (MCSS) framework, which utilizes similarity in pose
information from unannotated, uncalibrated but synchronized multi-view videos of
human motions as an additional weak supervision signal to guide 3D human pose
regression. Our framework applies hard-negative mining based on temporal
relations in multi-view videos to arrive at a multi-view consistent pose
embedding. When jointly trained with limited 3D pose annotations, our approach
improves the baseline by 25% and state-of-the-art by 8.7%, whilst using
substantially smaller networks. Lastly, but importantly, we demonstrate the
advantages of the learned embedding and establish view-invariant pose retrieval
benchmarks on two popular, publicly available multi-view human pose datasets,
Human 3.6M and MPI-INF-3DHP, to facilitate future research.
|
[
{
"created": "Wed, 14 Aug 2019 18:13:57 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Nov 2019 06:44:56 GMT",
"version": "v2"
},
{
"created": "Tue, 25 Feb 2020 06:14:42 GMT",
"version": "v3"
}
] |
2020-02-26
|
[
[
"Mitra",
"Rahul",
""
],
[
"Gundavarapu",
"Nitesh B.",
""
],
[
"Sharma",
"Abhishek",
""
],
[
"Jain",
"Arjun",
""
]
] |
The best performing methods for 3D human pose estimation from monocular images require large amounts of in-the-wild 2D and controlled 3D pose annotated datasets which are costly and require sophisticated systems to acquire. To reduce this annotation dependency, we propose the Multiview-Consistent Semi-Supervised Learning (MCSS) framework, which utilizes similarity in pose information from unannotated, uncalibrated but synchronized multi-view videos of human motions as an additional weak supervision signal to guide 3D human pose regression. Our framework applies hard-negative mining based on temporal relations in multi-view videos to arrive at a multi-view consistent pose embedding. When jointly trained with limited 3D pose annotations, our approach improves the baseline by 25% and state-of-the-art by 8.7%, whilst using substantially smaller networks. Lastly, but importantly, we demonstrate the advantages of the learned embedding and establish view-invariant pose retrieval benchmarks on two popular, publicly available multi-view human pose datasets, Human 3.6M and MPI-INF-3DHP, to facilitate future research.
|
1804.07376
|
Ashkan Yousefpour
|
Ashkan Yousefpour, Genya Ishigaki, Riti Gour, Jason P. Jue
|
On Reducing IoT Service Delay via Fog Offloading
| null |
IEEE Internet of Things Journal, vol. 5, no. 2, pp. 998-1010,
April 2018
|
10.1109/JIOT.2017.2788802
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the Internet of Things (IoT) becoming a major component of our daily
life, understanding how to improve the quality of service (QoS) for IoT
applications through fog computing is becoming an important problem. In this
paper, we introduce a general framework for IoT-fog-cloud applications, and
propose a delay-minimizing collaboration and offloading policy for fog-capable
devices that aims to reduce the service delay for IoT applications. We then
develop an analytical model to evaluate our policy and show how the proposed
framework helps to reduce IoT service delay.
|
[
{
"created": "Thu, 19 Apr 2018 20:58:04 GMT",
"version": "v1"
}
] |
2018-04-23
|
[
[
"Yousefpour",
"Ashkan",
""
],
[
"Ishigaki",
"Genya",
""
],
[
"Gour",
"Riti",
""
],
[
"Jue",
"Jason P.",
""
]
] |
With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve the quality of service (QoS) for IoT applications through fog computing is becoming an important problem. In this paper, we introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing collaboration and offloading policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.
|
1911.00238
|
Takato Horii
|
Kyoichiro Kobayashi, Takato Horii, Ryo Iwaki, Yukie Nagai and Minoru
Asada
|
Situated GAIL: Multitask imitation using task-conditioned adversarial
inverse reinforcement learning
|
Submitted to Advanced Robotics
| null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative adversarial imitation learning (GAIL) has attracted increasing
attention in the field of robot learning. It enables robots to learn a policy
to achieve a task demonstrated by an expert while simultaneously estimating the
reward function behind the expert's behaviors. However, this framework is
limited to learning a single task with a single reward function. This study
proposes an extended framework called situated GAIL (S-GAIL), in which a task
variable is introduced to both the discriminator and generator of the GAIL
framework. The task variable has the roles of discriminating different contexts
and making the framework learn different reward functions and policies for
multiple tasks. To achieve the early convergence of learning and robustness
during reward estimation, we introduce a term to adjust the entropy
regularization coefficient in the generator's objective function. Our
experiments using two setups (navigation in a discrete grid world and arm
reaching in a continuous space) demonstrate that the proposed framework can
acquire multiple reward functions and policies more effectively than existing
frameworks. The task variable enables our framework to differentiate contexts
while sharing common knowledge among multiple tasks.
|
[
{
"created": "Fri, 1 Nov 2019 07:50:30 GMT",
"version": "v1"
}
] |
2019-11-04
|
[
[
"Kobayashi",
"Kyoichiro",
""
],
[
"Horii",
"Takato",
""
],
[
"Iwaki",
"Ryo",
""
],
[
"Nagai",
"Yukie",
""
],
[
"Asada",
"Minoru",
""
]
] |
Generative adversarial imitation learning (GAIL) has attracted increasing attention in the field of robot learning. It enables robots to learn a policy to achieve a task demonstrated by an expert while simultaneously estimating the reward function behind the expert's behaviors. However, this framework is limited to learning a single task with a single reward function. This study proposes an extended framework called situated GAIL (S-GAIL), in which a task variable is introduced to both the discriminator and generator of the GAIL framework. The task variable has the roles of discriminating different contexts and making the framework learn different reward functions and policies for multiple tasks. To achieve the early convergence of learning and robustness during reward estimation, we introduce a term to adjust the entropy regularization coefficient in the generator's objective function. Our experiments using two setups (navigation in a discrete grid world and arm reaching in a continuous space) demonstrate that the proposed framework can acquire multiple reward functions and policies more effectively than existing frameworks. The task variable enables our framework to differentiate contexts while sharing common knowledge among multiple tasks.
|
1408.1292
|
Ilja Kuzborskij
|
Ilja Kuzborskij, Francesco Orabona, Barbara Caputo
|
Scalable Greedy Algorithms for Transfer Learning
| null | null |
10.1016/j.cviu.2016.09.003
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we consider the binary transfer learning problem, focusing on
how to select and combine sources from a large pool to yield a good performance
on a target task. Constraining our scenario to the real world, we do not assume
direct access to the source data, but rather we employ the source hypotheses
trained from them. We propose an efficient algorithm that selects relevant
source hypotheses and feature dimensions simultaneously, building on the
literature on the best subset selection problem. Our algorithm achieves
state-of-the-art results on three computer vision datasets, substantially
outperforming both transfer learning and popular feature selection baselines in
a small-sample setting. We also present a randomized variant that achieves the
same results with the computational cost independent from the number of source
hypotheses and feature dimensions. Also, we theoretically prove that, under
reasonable assumptions on the source hypotheses, our algorithm can learn
effectively from few examples.
|
[
{
"created": "Wed, 6 Aug 2014 14:27:57 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Dec 2014 15:56:53 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Oct 2015 10:27:39 GMT",
"version": "v3"
},
{
"created": "Sat, 18 Jun 2016 00:17:50 GMT",
"version": "v4"
}
] |
2016-09-16
|
[
[
"Kuzborskij",
"Ilja",
""
],
[
"Orabona",
"Francesco",
""
],
[
"Caputo",
"Barbara",
""
]
] |
In this paper we consider the binary transfer learning problem, focusing on how to select and combine sources from a large pool to yield a good performance on a target task. Constraining our scenario to the real world, we do not assume direct access to the source data, but rather we employ the source hypotheses trained from them. We propose an efficient algorithm that selects relevant source hypotheses and feature dimensions simultaneously, building on the literature on the best subset selection problem. Our algorithm achieves state-of-the-art results on three computer vision datasets, substantially outperforming both transfer learning and popular feature selection baselines in a small-sample setting. We also present a randomized variant that achieves the same results with the computational cost independent from the number of source hypotheses and feature dimensions. Also, we theoretically prove that, under reasonable assumptions on the source hypotheses, our algorithm can learn effectively from few examples.
|
1308.1464
|
Andy Terrel
|
Andy R. Terrel and Kyle T. Mandli
|
ManyClaw: Slicing and dicing Riemann solvers for next generation highly
parallel architectures
|
TACC-Intel Symposium on Highly Parallel Architectures. 2012
| null | null | null |
cs.CE cs.MS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Next generation computer architectures will include orders of magnitude more
intra-node parallelism; however, many application programmers have a difficult
time keeping their codes current with state-of-the-art machines. In this
context, we analyze hyperbolic PDE solvers, which are used in the solution of
many important applications in science and engineering. We present ManyClaw, a
project intended to explore the exploitation of intra-node parallelism in
hyperbolic PDE solvers via the Clawpack software package for solving hyperbolic
PDEs. Our goal is to separate the low level parallelism and the physical
equations thus providing users the capability to leverage intra-node
parallelism without explicitly writing code to take advantage of newer
architectures.
|
[
{
"created": "Wed, 7 Aug 2013 02:24:20 GMT",
"version": "v1"
}
] |
2013-08-08
|
[
[
"Terrel",
"Andy R.",
""
],
[
"Mandli",
"Kyle T.",
""
]
] |
Next generation computer architectures will include orders of magnitude more intra-node parallelism; however, many application programmers have a difficult time keeping their codes current with state-of-the-art machines. In this context, we analyze hyperbolic PDE solvers, which are used in the solution of many important applications in science and engineering. We present ManyClaw, a project intended to explore the exploitation of intra-node parallelism in hyperbolic PDE solvers via the Clawpack software package for solving hyperbolic PDEs. Our goal is to separate the low level parallelism and the physical equations thus providing users the capability to leverage intra-node parallelism without explicitly writing code to take advantage of newer architectures.
|
1411.0154
|
Ferruccio Guidi Dr
|
Ferruccio Guidi
|
Extending the Applicability Condition in the Formal System
$\lambda\delta$
|
36 pages, updated to appear as a technical report
| null | null |
AMS-Acta 4411
|
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The formal system $\lambda\delta$ is a typed lambda calculus derived from
$\Lambda_\infty$, aiming to support the foundations of Mathematics that require
an underlying theory of expressions (for example the Minimal Type Theory). The
system is developed in the context of the Hypertextual Electronic Library of
Mathematics as a machine-checked digital specification, that is not the formal
counterpart of previous informal material. The first version of the calculus
appeared in 2006 and proved unsatisfactory in some respects. In this article we
present a revised version of the system and we prove three relevant desired
properties: the confluence of reduction, the strong normalization of an
extended form of reduction, known as the "big tree" theorem, and the
preservation of validity by reduction. To our knowledge, we are presenting here
the first fully machine-checked proof of the "big tree" theorem for a calculus
that includes $\Lambda_\infty$.
|
[
{
"created": "Sat, 1 Nov 2014 18:58:40 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Mar 2015 14:49:56 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Nov 2019 22:41:24 GMT",
"version": "v3"
}
] |
2019-12-02
|
[
[
"Guidi",
"Ferruccio",
""
]
] |
The formal system $\lambda\delta$ is a typed lambda calculus derived from $\Lambda_\infty$, aiming to support the foundations of Mathematics that require an underlying theory of expressions (for example the Minimal Type Theory). The system is developed in the context of the Hypertextual Electronic Library of Mathematics as a machine-checked digital specification, that is not the formal counterpart of previous informal material. The first version of the calculus appeared in 2006 and proved unsatisfactory in some respects. In this article we present a revised version of the system and we prove three relevant desired properties: the confluence of reduction, the strong normalization of an extended form of reduction, known as the "big tree" theorem, and the preservation of validity by reduction. To our knowledge, we are presenting here the first fully machine-checked proof of the "big tree" theorem for a calculus that includes $\Lambda_\infty$.
|
2205.00893
|
Emmanuel Kwarteng
|
Emmanuel Kwarteng (PhD Candidate), Dr. Mumin Cebe
|
A Survey on Security Issues in Modern Implantable Devices: Solutions and
Future Issues
|
There are 18 pages including reference pages, 5 figures, and 4 tables
submitted to Smart Health by Elsevier. Emmanuel Kwarteng: Conceptualization,
Investigation, Resources, Methodology, Writing-Original Draft, Visualization.
Mumin Cebe: Writing-Review & Editing, Validation, Supervision
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Implantable Medical Devices (IMDs) form a fast-growing medical field that will
continue to grow in the foreseeable future. Advances in science and technology
have enabled IMDs to offer advanced medical treatments.
Modern IMDs can automatically monitor and manage different patients' health
conditions without any manual intervention from medical professionals. While
IMDs are also becoming more connected to enhance the delivery of care remotely
and provide the means for both patients and physicians to adjust therapy at the
comfort of their homes, this connectivity also increases security-related concerns.
Adversaries could take advantage and exploit device vulnerabilities to
manipulate device settings remotely from anywhere around the world. This
manuscript reviews the current threats, security goals, and proposed solutions
by comparing them with their strengths and limitations. We also highlight the
emerging IMD technologies and innovative ideas for new designs and
implementations to improve the security of IMDs. Finally, we conclude the
article with future research directions toward securing IMD systems to light
the way for researchers.
|
[
{
"created": "Mon, 2 May 2022 13:03:41 GMT",
"version": "v1"
}
] |
2022-05-03
|
[
[
"Kwarteng",
"Emmanuel",
"",
"PhD Candidate"
],
[
"Cebe",
"Dr. Mumin",
""
]
] |
Implantable Medical Devices (IMDs) form a fast-growing medical field that will continue to grow in the foreseeable future. Advances in science and technology have enabled IMDs to offer advanced medical treatments. Modern IMDs can automatically monitor and manage different patients' health conditions without any manual intervention from medical professionals. While IMDs are also becoming more connected to enhance the delivery of care remotely and provide the means for both patients and physicians to adjust therapy at the comfort of their homes, this connectivity also increases security-related concerns. Adversaries could take advantage and exploit device vulnerabilities to manipulate device settings remotely from anywhere around the world. This manuscript reviews the current threats, security goals, and proposed solutions by comparing them with their strengths and limitations. We also highlight the emerging IMD technologies and innovative ideas for new designs and implementations to improve the security of IMDs. Finally, we conclude the article with future research directions toward securing IMD systems to light the way for researchers.
|
2302.01714
|
Muah Kim
|
Muah Kim, Rick Fritschek, Rafael F. Schaefer
|
Learning End-to-End Channel Coding with Diffusion Models
|
6 pages, WSA/SCC 2023
| null | null | null |
cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is a known problem that deep-learning-based end-to-end (E2E) channel
coding systems depend on a known and differentiable channel model, owing to
their gradient-descent-based learning process. This poses the challenge of
approximating or generating the channel or its derivative from samples
generated by pilot signaling in real-world scenarios. Currently,
there are two prevalent methods to solve this problem. One is to generate the
channel via a generative adversarial network (GAN), and the other is to, in
essence, approximate the gradient via reinforcement learning methods. Other
methods include using score-based methods, variational autoencoders, or
mutual-information-based methods. In this paper, we focus on generative models
and, in particular, on a new promising method called diffusion models, which
have shown a higher quality of generation in image-based tasks. We will show
that diffusion models can be used in wireless E2E scenarios and that they work
as well as Wasserstein GANs while having a more stable training procedure and a
better generalization ability in testing.
|
[
{
"created": "Fri, 3 Feb 2023 13:11:57 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Nov 2023 14:54:04 GMT",
"version": "v2"
}
] |
2023-11-30
|
[
[
"Kim",
"Muah",
""
],
[
"Fritschek",
"Rick",
""
],
[
"Schaefer",
"Rafael F.",
""
]
] |
It is a known problem that deep-learning-based end-to-end (E2E) channel coding systems depend on a known and differentiable channel model, due to the learning process and the gradient-descent optimization methods it relies on. This poses the challenge of approximating or generating the channel or its derivative from samples generated by pilot signaling in real-world scenarios. Currently, there are two prevalent methods to solve this problem. One is to generate the channel via a generative adversarial network (GAN), and the other is to, in essence, approximate the gradient via reinforcement learning methods. Other methods include using score-based methods, variational autoencoders, or mutual-information-based methods. In this paper, we focus on generative models and, in particular, on a new promising method called diffusion models, which have shown a higher quality of generation in image-based tasks. We will show that diffusion models can be used in wireless E2E scenarios and that they perform as well as Wasserstein GANs while having a more stable training procedure and better generalization ability in testing.
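As a concrete illustration of the forward (noising) process underlying the diffusion models discussed in this abstract, a minimal closed-form DDPM corruption step can be sketched as follows; the linear beta schedule and function names are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def forward_diffuse(x0, t, T=1000, beta_min=1e-4, beta_max=0.02):
    """Closed-form DDPM forward process:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, 1),
    applied element-wise to a list of real-valued channel samples."""
    # Linear beta schedule; abar_t is the cumulative product of (1 - beta_s).
    abar = 1.0
    for s in range(1, t + 1):
        beta = beta_min + (beta_max - beta_min) * (s - 1) / (T - 1)
        abar *= 1.0 - beta
    noised = [math.sqrt(abar) * x + math.sqrt(1.0 - abar) * random.gauss(0.0, 1.0)
              for x in x0]
    return noised, abar
```

A reverse network would then be trained to denoise x_t back toward clean channel samples; this sketch shows only the analytic forward corruption.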
|
2107.13423
|
Guangliang Pan
|
Guangliang Pan, Zitong Liu, Wei Wang, Minglei Li
|
A Signal Detection Scheme Based on Deep Learning in OFDM Systems
| null | null | null | null |
cs.IT cs.LG eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Channel estimation and signal detection are essential steps to ensure the
quality of end-to-end communication in orthogonal frequency-division
multiplexing (OFDM) systems. In this paper, we develop DDLSD, a Data-driven
Deep Learning approach for Signal Detection in OFDM systems. First, the OFDM
system model is established. Then, a long short-term memory (LSTM) network is
introduced into the OFDM system model. Wireless channel data are generated
through simulation, and the preprocessed time-series features are fed into the
LSTM to complete the offline training. Finally, the trained model is used for
online recovery of the transmitted signal. The difference between this scheme
and existing OFDM receivers is that the explicit estimated channel state
information (CSI) is replaced by implicitly estimated CSI, and the transmitted
symbols are restored directly. Simulation results show that the DDLSD scheme
outperforms existing traditional methods in terms of channel estimation and
signal detection performance.
|
[
{
"created": "Sat, 24 Jul 2021 04:25:46 GMT",
"version": "v1"
}
] |
2021-07-29
|
[
[
"Pan",
"Guangliang",
""
],
[
"Liu",
"Zitong",
""
],
[
"Wang",
"Wei",
""
],
[
"Li",
"Minglei",
""
]
] |
Channel estimation and signal detection are essential steps to ensure the quality of end-to-end communication in orthogonal frequency-division multiplexing (OFDM) systems. In this paper, we develop DDLSD, a Data-driven Deep Learning approach for Signal Detection in OFDM systems. First, the OFDM system model is established. Then, a long short-term memory (LSTM) network is introduced into the OFDM system model. Wireless channel data are generated through simulation, and the preprocessed time-series features are fed into the LSTM to complete the offline training. Finally, the trained model is used for online recovery of the transmitted signal. The difference between this scheme and existing OFDM receivers is that the explicit estimated channel state information (CSI) is replaced by implicitly estimated CSI, and the transmitted symbols are restored directly. Simulation results show that the DDLSD scheme outperforms existing traditional methods in terms of channel estimation and signal detection performance.
|
2406.06045
|
Ke Niu
|
Ke Niu, Haiyang Yu, Xuelin Qian, Teng Fu, Bin Li, and Xiangyang Xue
|
Synthesizing Efficient Data with Diffusion Models for Person
Re-Identification Pre-Training
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Existing person re-identification (Re-ID) methods principally deploy the
ImageNet-1K dataset for model initialization, which inevitably results in
sub-optimal situations due to the large domain gap. One of the key challenges
is that building large-scale person Re-ID datasets is time-consuming. Some
previous efforts address this problem by collecting person images from the
internet, e.g., LUPerson, but such approaches struggle to learn from unlabeled,
uncontrollable, and noisy data. In this paper, we present a novel paradigm,
Diffusion-ReID, to efficiently augment and generate diverse images based on
known identities without the cost of data collection and annotation.
Technically, this paradigm unfolds in two stages: generation and filtering.
During the generation stage, we propose Language Prompts Enhancement (LPE) to
ensure ID consistency between the input image sequence and the generated
images. In the diffusion process, we propose a Diversity Injection (DI) module
to increase attribute diversity. To ensure that the generated data are of
higher quality, we apply a Re-ID confidence threshold filter to further remove
low-quality images. Benefiting from our proposed paradigm, we first create a
new large-scale person Re-ID dataset, Diff-Person, which consists of over 777K
images from 5,183 identities. Next, we build a stronger person Re-ID backbone
pre-trained on our Diff-Person. Extensive experiments are conducted on four
person Re-ID benchmarks in six widely used settings. Compared with other
pre-training and self-supervised competitors, our approach shows significant
superiority.
|
[
{
"created": "Mon, 10 Jun 2024 06:26:03 GMT",
"version": "v1"
}
] |
2024-06-11
|
[
[
"Niu",
"Ke",
""
],
[
"Yu",
"Haiyang",
""
],
[
"Qian",
"Xuelin",
""
],
[
"Fu",
"Teng",
""
],
[
"Li",
"Bin",
""
],
[
"Xue",
"Xiangyang",
""
]
] |
Existing person re-identification (Re-ID) methods principally deploy the ImageNet-1K dataset for model initialization, which inevitably results in sub-optimal situations due to the large domain gap. One of the key challenges is that building large-scale person Re-ID datasets is time-consuming. Some previous efforts address this problem by collecting person images from the internet, e.g., LUPerson, but such approaches struggle to learn from unlabeled, uncontrollable, and noisy data. In this paper, we present a novel paradigm, Diffusion-ReID, to efficiently augment and generate diverse images based on known identities without the cost of data collection and annotation. Technically, this paradigm unfolds in two stages: generation and filtering. During the generation stage, we propose Language Prompts Enhancement (LPE) to ensure ID consistency between the input image sequence and the generated images. In the diffusion process, we propose a Diversity Injection (DI) module to increase attribute diversity. To ensure that the generated data are of higher quality, we apply a Re-ID confidence threshold filter to further remove low-quality images. Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset, Diff-Person, which consists of over 777K images from 5,183 identities. Next, we build a stronger person Re-ID backbone pre-trained on our Diff-Person. Extensive experiments are conducted on four person Re-ID benchmarks in six widely used settings. Compared with other pre-training and self-supervised competitors, our approach shows significant superiority.
|
2307.10751
|
Advait Sarkar
|
Advait Sarkar
|
Exploring Perspectives on the Impact of Artificial Intelligence on the
Creativity of Knowledge Work: Beyond Mechanised Plagiarism and Stochastic
Parrots
|
Advait Sarkar. 2023. Exploring Perspectives on the Impact of
Artificial Intelligence on the Creativity of Knowledge Work Beyond Mechanised
Plagiarism and Stochastic Parrots. In Annual Symposium on Human-Computer
Interaction for Work 2023 (CHIWORK 2023), June 13-16, 2023, Oldenburg,
Germany. ACM, New York, NY, USA, 17 pages
| null |
10.1145/3596671.3597650
| null |
cs.HC cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial Intelligence (AI), and in particular generative models, are
transformative tools for knowledge work. They problematise notions of
creativity, originality, plagiarism, the attribution of credit, and copyright
ownership. Critics of generative models emphasise the reliance on large amounts
of training data, and view the output of these models as no more than
randomised plagiarism, remix, or collage of the source data. On these grounds,
many have argued for stronger regulations on the deployment, use, and
attribution of the output of these models. However, these issues are not new or
unique to artificial intelligence. In this position paper, using examples from
literary criticism, the history of art, and copyright law, I show how
creativity and originality resist definition as a notatable or
information-theoretic property of an object, and instead can be seen as the
property of a process, an author, or a viewer. Further alternative views hold
that all creative work is essentially reuse (mostly without attribution), or
that randomness itself can be creative. I suggest that creativity is ultimately
defined by communities of creators and receivers, and the deemed sources of
creativity in a workflow often depend on which parts of the workflow can be
automated. Using examples from recent studies of AI in creative knowledge work,
I suggest that AI shifts knowledge work from material production to critical
integration. This position paper aims to begin a conversation around a more
nuanced approach to the problems of creativity and credit assignment for
generative models, one which more fully recognises the importance of the
creative and curatorial voice of the users of these models and moves away from
simpler notational or information-theoretic views.
|
[
{
"created": "Thu, 20 Jul 2023 10:26:57 GMT",
"version": "v1"
}
] |
2023-07-21
|
[
[
"Sarkar",
"Advait",
""
]
] |
Artificial Intelligence (AI), and in particular generative models, are transformative tools for knowledge work. They problematise notions of creativity, originality, plagiarism, the attribution of credit, and copyright ownership. Critics of generative models emphasise the reliance on large amounts of training data, and view the output of these models as no more than randomised plagiarism, remix, or collage of the source data. On these grounds, many have argued for stronger regulations on the deployment, use, and attribution of the output of these models. However, these issues are not new or unique to artificial intelligence. In this position paper, using examples from literary criticism, the history of art, and copyright law, I show how creativity and originality resist definition as a notatable or information-theoretic property of an object, and instead can be seen as the property of a process, an author, or a viewer. Further alternative views hold that all creative work is essentially reuse (mostly without attribution), or that randomness itself can be creative. I suggest that creativity is ultimately defined by communities of creators and receivers, and the deemed sources of creativity in a workflow often depend on which parts of the workflow can be automated. Using examples from recent studies of AI in creative knowledge work, I suggest that AI shifts knowledge work from material production to critical integration. This position paper aims to begin a conversation around a more nuanced approach to the problems of creativity and credit assignment for generative models, one which more fully recognises the importance of the creative and curatorial voice of the users of these models and moves away from simpler notational or information-theoretic views.
|
cs/0402009
|
Richard McClatchey
|
F Estrella, C del Frate, T Hauer, R McClatchey, M Odeh, D Rogulin, S R
Amendolia, D Schottlander, T Solomonides, R Warren
|
Resolving Clinicians' Queries Across a Grids Infrastructure
|
8 pages, 3 figures. Presented at the 2nd Int Conf on HealthGrids
Clermont-Ferrand, France January 2004 and accepted by Methods of Information
in Medicine
| null | null | null |
cs.DB cs.SE
| null |
  The past decade has witnessed order-of-magnitude increases in computing
power, data storage capacity, and network speed, giving birth to applications
that can handle large data volumes of increased complexity, distributed over
the Internet. Grid computing promises to resolve many of the difficulties in
facilitating medical image analysis, allowing radiologists to collaborate
without having to co-locate. The EU-funded MammoGrid project aims to
investigate the feasibility of developing a Grid-enabled European database of
mammograms and to provide an information infrastructure that federates multiple
mammogram databases. This will enable clinicians to develop new common,
collaborative, and co-operative approaches to the analysis of mammographic
data. This paper focuses on one of the key requirements for large-scale
distributed mammogram analysis: resolving queries across a grid-connected
federation of images.
|
[
{
"created": "Tue, 3 Feb 2004 14:32:39 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Estrella",
"F",
""
],
[
"del Frate",
"C",
""
],
[
"Hauer",
"T",
""
],
[
"McClatchey",
"R",
""
],
[
"Odeh",
"M",
""
],
[
"Rogulin",
"D",
""
],
[
"Amendolia",
"S R",
""
],
[
"Schottlander",
"D",
""
],
[
"Solomonides",
"T",
""
],
[
"Warren",
"R",
""
]
] |
The past decade has witnessed order-of-magnitude increases in computing power, data storage capacity, and network speed, giving birth to applications that can handle large data volumes of increased complexity, distributed over the Internet. Grid computing promises to resolve many of the difficulties in facilitating medical image analysis, allowing radiologists to collaborate without having to co-locate. The EU-funded MammoGrid project aims to investigate the feasibility of developing a Grid-enabled European database of mammograms and to provide an information infrastructure that federates multiple mammogram databases. This will enable clinicians to develop new common, collaborative, and co-operative approaches to the analysis of mammographic data. This paper focuses on one of the key requirements for large-scale distributed mammogram analysis: resolving queries across a grid-connected federation of images.
|
2305.04239
|
Zhitao Liu
|
Zhitao Liu, Zengyu Liu, Jiwei Wei, Guan Wang, Zhenjiang Du, Ning Xie,
Heng Tao Shen
|
Instance-Variant Loss with Gaussian RBF Kernel for 3D Cross-modal
  Retrieval
| null | null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D cross-modal retrieval is gaining attention in the multimedia community.
Central to this topic is learning a joint embedding space to represent data
from different modalities, such as images, 3D point clouds, and polygon meshes,
to extract modality-invariant and discriminative features. Hence, the
performance of cross-modal retrieval methods heavily depends on the
representational capacity of this embedding space. Existing methods treat all
instances equally, applying the same penalty strength to instances with varying
degrees of difficulty, ignoring the differences between instances. This can
result in ambiguous convergence or local optima, severely compromising the
separability of the feature space. To address this limitation, we propose an
Instance-Variant loss to assign different penalty strengths to different
instances, improving the space separability. Specifically, we assign different
penalty weights to instances positively related to their intra-class distance.
Simultaneously, we reduce the cross-modal discrepancy between features by
learning a shared weight vector for the same class data from different
modalities. By leveraging the Gaussian RBF kernel to evaluate sample
similarity, we further propose an Intra-Class loss function that minimizes the
intra-class distance among same-class instances. Extensive experiments on three
3D cross-modal datasets show that our proposed method surpasses recent
state-of-the-art approaches.
|
[
{
"created": "Sun, 7 May 2023 10:12:14 GMT",
"version": "v1"
}
] |
2023-05-09
|
[
[
"Liu",
"Zhitao",
""
],
[
"Liu",
"Zengyu",
""
],
[
"Wei",
"Jiwei",
""
],
[
"Wang",
"Guan",
""
],
[
"Du",
"Zhenjiang",
""
],
[
"Xie",
"Ning",
""
],
[
"Shen",
"Heng Tao",
""
]
] |
3D cross-modal retrieval is gaining attention in the multimedia community. Central to this topic is learning a joint embedding space to represent data from different modalities, such as images, 3D point clouds, and polygon meshes, to extract modality-invariant and discriminative features. Hence, the performance of cross-modal retrieval methods heavily depends on the representational capacity of this embedding space. Existing methods treat all instances equally, applying the same penalty strength to instances with varying degrees of difficulty, ignoring the differences between instances. This can result in ambiguous convergence or local optima, severely compromising the separability of the feature space. To address this limitation, we propose an Instance-Variant loss to assign different penalty strengths to different instances, improving the space separability. Specifically, we assign different penalty weights to instances positively related to their intra-class distance. Simultaneously, we reduce the cross-modal discrepancy between features by learning a shared weight vector for the same class data from different modalities. By leveraging the Gaussian RBF kernel to evaluate sample similarity, we further propose an Intra-Class loss function that minimizes the intra-class distance among same-class instances. Extensive experiments on three 3D cross-modal datasets show that our proposed method surpasses recent state-of-the-art approaches.
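To make the Gaussian RBF kernel similarity mentioned above concrete, here is a minimal sketch of the kernel and of an intra-class loss built on it; the function names and the `gamma` bandwidth parameter are illustrative assumptions, not the authors' code:

```python
import math

def rbf_similarity(x, y, gamma=1.0):
    """Gaussian RBF kernel: exp(-gamma * ||x - y||^2); equals 1 for
    identical vectors and decays toward 0 as they move apart."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def intra_class_loss(features, gamma=1.0):
    """Average (1 - similarity) over all pairs of same-class features;
    minimizing it pulls same-class instances together in the embedding."""
    n = len(features)
    if n < 2:
        return 0.0
    total = 0.0
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1.0 - rbf_similarity(features[i], features[j], gamma)
            pairs += 1
    return total / pairs
```

The loss is zero exactly when all same-class features coincide, matching the stated goal of minimizing intra-class distance.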
|
2010.01150
|
Xiang Dai
|
Xiang Dai and Sarvnaz Karimi and Ben Hachey and Cecile Paris
|
Cost-effective Selection of Pretraining Data: A Case Study of
Pretraining BERT on Social Media
|
Findings of EMNLP 2020
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies on domain-specific BERT models show that effectiveness on
downstream tasks can be improved when models are pretrained on in-domain data.
Often, the pretraining data used in these models are selected based on their
subject matter, e.g., biology or computer science. Given the range of
applications using social media text, and its unique language variety, we
pretrain two models on tweets and forum text respectively, and empirically
demonstrate the effectiveness of these two resources. In addition, we
investigate how similarity measures can be used to nominate in-domain
pretraining data. We publicly release our pretrained models at
https://bit.ly/35RpTf0.
|
[
{
"created": "Fri, 2 Oct 2020 18:06:31 GMT",
"version": "v1"
}
] |
2020-10-06
|
[
[
"Dai",
"Xiang",
""
],
[
"Karimi",
"Sarvnaz",
""
],
[
"Hachey",
"Ben",
""
],
[
"Paris",
"Cecile",
""
]
] |
Recent studies on domain-specific BERT models show that effectiveness on downstream tasks can be improved when models are pretrained on in-domain data. Often, the pretraining data used in these models are selected based on their subject matter, e.g., biology or computer science. Given the range of applications using social media text, and its unique language variety, we pretrain two models on tweets and forum text respectively, and empirically demonstrate the effectiveness of these two resources. In addition, we investigate how similarity measures can be used to nominate in-domain pretraining data. We publicly release our pretrained models at https://bit.ly/35RpTf0.
|
2405.18042
|
Youngwan Lee
|
Youngwan Lee, Jeffrey Ryan Willette, Jonghee Kim, Sung Ju Hwang
|
Visualizing the loss landscape of Self-supervised Vision Transformer
|
NeurIPS 2023 Workshop: Self-Supervised Learning - Theory and Practice
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
  The Masked autoencoder (MAE) has drawn attention as a representative
self-supervised approach for masked image modeling with vision transformers.
However, even though MAE shows better generalization capability than fully
supervised training from scratch, the reason for this has not been explored.
In another line of work, the Reconstruction Consistent Masked Auto Encoder
(RC-MAE) has been proposed, which adopts a self-distillation scheme in the form
of an exponential moving average (EMA) teacher into MAE; it has been shown
that the EMA teacher performs a conditional gradient correction during
optimization. To further investigate the reason for the better generalization
of the self-supervised ViT when trained by MAE (MAE-ViT), and the effect of the
gradient correction of RC-MAE from the perspective of optimization, we
visualize the loss landscapes of the self-supervised vision transformer trained
by both MAE and RC-MAE and compare them with the supervised ViT (Sup-ViT).
Unlike previous loss landscape visualizations of neural networks based on
classification task loss, we visualize the loss landscape of ViT by computing
the pre-training task loss. Through the lens of loss landscapes, we find two
interesting observations: (1) MAE-ViT has a smoother and wider overall loss
curvature than Sup-ViT. (2) The EMA teacher allows MAE to widen the region of
convexity in both pretraining and linear probing, leading to quicker
convergence. To the best of our knowledge, this work is the first to
investigate the self-supervised ViT through the lens of the loss landscape.
|
[
{
"created": "Tue, 28 May 2024 10:54:26 GMT",
"version": "v1"
}
] |
2024-05-29
|
[
[
"Lee",
"Youngwan",
""
],
[
"Willette",
"Jeffrey Ryan",
""
],
[
"Kim",
"Jonghee",
""
],
[
"Hwang",
"Sung Ju",
""
]
] |
The Masked autoencoder (MAE) has drawn attention as a representative self-supervised approach for masked image modeling with vision transformers. However, even though MAE shows better generalization capability than fully supervised training from scratch, the reason for this has not been explored. In another line of work, the Reconstruction Consistent Masked Auto Encoder (RC-MAE) has been proposed, which adopts a self-distillation scheme in the form of an exponential moving average (EMA) teacher into MAE; it has been shown that the EMA teacher performs a conditional gradient correction during optimization. To further investigate the reason for the better generalization of the self-supervised ViT when trained by MAE (MAE-ViT), and the effect of the gradient correction of RC-MAE from the perspective of optimization, we visualize the loss landscapes of the self-supervised vision transformer trained by both MAE and RC-MAE and compare them with the supervised ViT (Sup-ViT). Unlike previous loss landscape visualizations of neural networks based on classification task loss, we visualize the loss landscape of ViT by computing the pre-training task loss. Through the lens of loss landscapes, we find two interesting observations: (1) MAE-ViT has a smoother and wider overall loss curvature than Sup-ViT. (2) The EMA teacher allows MAE to widen the region of convexity in both pretraining and linear probing, leading to quicker convergence. To the best of our knowledge, this work is the first to investigate the self-supervised ViT through the lens of the loss landscape.
|
1906.04279
|
Zhizhou Ren
|
Zhizhou Ren, Kefan Dong, Yuan Zhou, Qiang Liu, Jian Peng
|
Exploration via Hindsight Goal Generation
|
Thirty-third Conference on Neural Information Processing Systems
(NeurIPS 2019)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Goal-oriented reinforcement learning has recently become a practical
framework for robotic manipulation tasks, in which an agent is required to
reach a certain goal defined by a function on the state space. However, the
sparsity of such a reward definition makes traditional reinforcement learning
algorithms very inefficient. Hindsight Experience Replay (HER), a recent
advance, has greatly improved sample efficiency and practical applicability for
such problems. It exploits previous replays by constructing imaginary goals in
a simple heuristic way, acting like an implicit curriculum to alleviate the
challenge of sparse reward signals. In this paper, we introduce Hindsight Goal
Generation (HGG), a novel algorithmic framework that generates valuable
hindsight goals that are easy for an agent to achieve in the short term and
also have the potential to guide the agent to the actual goal in the long term.
We have extensively evaluated our goal generation algorithm on a number of
robotic manipulation tasks and demonstrated substantial improvement over the
original HER in terms of sample efficiency.
|
[
{
"created": "Mon, 10 Jun 2019 21:21:18 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Dec 2019 05:35:33 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Dec 2019 04:31:39 GMT",
"version": "v3"
}
] |
2019-12-19
|
[
[
"Ren",
"Zhizhou",
""
],
[
"Dong",
"Kefan",
""
],
[
"Zhou",
"Yuan",
""
],
[
"Liu",
"Qiang",
""
],
[
"Peng",
"Jian",
""
]
] |
Goal-oriented reinforcement learning has recently become a practical framework for robotic manipulation tasks, in which an agent is required to reach a certain goal defined by a function on the state space. However, the sparsity of such a reward definition makes traditional reinforcement learning algorithms very inefficient. Hindsight Experience Replay (HER), a recent advance, has greatly improved sample efficiency and practical applicability for such problems. It exploits previous replays by constructing imaginary goals in a simple heuristic way, acting like an implicit curriculum to alleviate the challenge of sparse reward signals. In this paper, we introduce Hindsight Goal Generation (HGG), a novel algorithmic framework that generates valuable hindsight goals that are easy for an agent to achieve in the short term and also have the potential to guide the agent to the actual goal in the long term. We have extensively evaluated our goal generation algorithm on a number of robotic manipulation tasks and demonstrated substantial improvement over the original HER in terms of sample efficiency.
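To make the hindsight idea behind HER concrete, here is a minimal relabeling sketch; the dictionary schema and the sparse reward function are illustrative assumptions, and HGG itself selects goals more carefully than this simple "final-state" heuristic:

```python
def sparse_reward(achieved, goal, tol=1e-6):
    """Sparse goal-reaching reward: 0 on success, -1 otherwise."""
    return 0.0 if abs(achieved - goal) <= tol else -1.0

def relabel_with_hindsight(episode):
    """HER 'final' strategy: replay the episode as if the final achieved
    state had been the goal all along, so the last step becomes a success
    and the agent receives a learning signal even from failed episodes."""
    hindsight_goal = episode[-1]["achieved"]
    return [
        {
            "state": step["state"],
            "action": step["action"],
            "goal": hindsight_goal,
            "reward": sparse_reward(step["achieved"], hindsight_goal),
        }
        for step in episode
    ]
```

The relabeled transitions are pushed into the replay buffer alongside the original ones, which is what turns sparse-reward failures into useful training data.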
|
2112.13099
|
Amir Shaikhha
|
Amir Shaikhha, Marios Kelepeshis, Mahdi Ghorbani
|
Fine-Tuning Data Structures for Analytical Query Processing
| null | null | null | null |
cs.DB cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a framework for automatically choosing data structures to
support efficient computation of analytical workloads. Our contributions are
twofold. First, we introduce a novel low-level intermediate language that can
express the algorithms behind various query processing paradigms such as
classical joins, groupjoin, and in-database machine learning engines. This
language is designed around the notion of dictionaries, and allows for a more
fine-grained choice of its low-level implementation. Second, the cost model for
alternative implementations is automatically inferred by combining machine
learning and program reasoning. The dictionary cost model is learned using a
regression model trained over the profiling dataset of dictionary operations on
a given hardware architecture. The program cost model is inferred using static
program analysis.
Our experimental results show the effectiveness of the trained cost model on
micro benchmarks. Furthermore, we show that the performance of the code
generated by our framework either outperforms or is on par with the
state-of-the-art analytical query engines and a recent in-database machine
learning framework.
|
[
{
"created": "Fri, 24 Dec 2021 16:36:35 GMT",
"version": "v1"
}
] |
2021-12-28
|
[
[
"Shaikhha",
"Amir",
""
],
[
"Kelepeshis",
"Marios",
""
],
[
"Ghorbani",
"Mahdi",
""
]
] |
We introduce a framework for automatically choosing data structures to support efficient computation of analytical workloads. Our contributions are twofold. First, we introduce a novel low-level intermediate language that can express the algorithms behind various query processing paradigms such as classical joins, groupjoin, and in-database machine learning engines. This language is designed around the notion of dictionaries, and allows for a more fine-grained choice of its low-level implementation. Second, the cost model for alternative implementations is automatically inferred by combining machine learning and program reasoning. The dictionary cost model is learned using a regression model trained over the profiling dataset of dictionary operations on a given hardware architecture. The program cost model is inferred using static program analysis. Our experimental results show the effectiveness of the trained cost model on micro benchmarks. Furthermore, we show that the performance of the code generated by our framework either outperforms or is on par with the state-of-the-art analytical query engines and a recent in-database machine learning framework.
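As a toy stand-in for the learned dictionary cost model described above, a regression over profiled operation timings can be as simple as an ordinary least-squares fit; the linear time-vs-size form and all names here are assumptions for illustration, not the paper's actual model:

```python
def fit_linear_cost_model(sizes, times):
    """Ordinary least squares for time ~ a * size + b, the simplest
    regression one could train on profiled dictionary-operation timings."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(times) / n
    # Slope from covariance / variance, intercept from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
    var = sum((x - mean_x) ** 2 for x in sizes)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict_cost(model, size):
    """Estimated cost of an operation on a dictionary of the given size."""
    a, b = model
    return a * size + b
```

A cost-based optimizer would then combine such per-operation estimates with the statically inferred program cost model to rank candidate data-structure implementations.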
|
1503.05992
|
Sugata Sanyal
|
Subhamoy Chakraborti, D. P. Acharjya, Sugata Sanyal
|
Application Security framework for Mobile App Development in Enterprise
setup
|
7 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Enterprise mobility has been increasing its reach over the years. Initially,
mobile devices were adopted as consumer devices. However, enterprises the
world over have rightly taken the leap and started using this ubiquitous
technology to manage their employees as well as to reach out to customers.
While the mobile ecosystem has been evolving over the years, the increased
exposure of mobility in enterprise frameworks has drawn major focus to its
security aspects. While significant focus has been placed on network security,
this paper discusses an approach that can be taken at the mobile application
layer to reduce the risk to enterprises.
|
[
{
"created": "Fri, 20 Mar 2015 04:55:50 GMT",
"version": "v1"
}
] |
2015-03-23
|
[
[
"Chakraborti",
"Subhamoy",
""
],
[
"Acharjya",
"D. P.",
""
],
[
"Sanyal",
"Sugata",
""
]
] |
Enterprise mobility has been increasing its reach over the years. Initially, mobile devices were adopted as consumer devices. However, enterprises the world over have rightly taken the leap and started using this ubiquitous technology to manage their employees as well as to reach out to customers. While the mobile ecosystem has been evolving over the years, the increased exposure of mobility in enterprise frameworks has drawn major focus to its security aspects. While significant focus has been placed on network security, this paper discusses an approach that can be taken at the mobile application layer to reduce the risk to enterprises.
|
2209.07000
|
Shikhar Singh
|
Shikhar Singh, Ehsan Qasemi, Muhao Chen
|
VIPHY: Probing "Visible" Physical Commonsense Knowledge
|
In Progress (under review)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, vision-language models (VLMs) have shown remarkable
performance on visual reasoning tasks (e.g. attributes, location). While such
tasks measure the requisite knowledge to ground and reason over a given visual
instance, they do not, however, measure the ability of VLMs to retain and
generalize such knowledge. In this work, we evaluate their ability to acquire
"visible" physical knowledge -- the information that is easily accessible from
images of static scenes, particularly across the dimensions of object color,
size and space. We build an automatic pipeline to derive a comprehensive
knowledge resource for calibrating and probing these models. Our results
indicate a severe gap between model and human performance across all three
tasks. Furthermore, our caption pretrained baseline (CapBERT) significantly
outperforms VLMs on both size and spatial tasks -- highlighting that despite
sufficient access to ground language with visual modality, they struggle to
retain such knowledge. The dataset and code are available at
https://github.com/Axe--/ViPhy.
|
[
{
"created": "Thu, 15 Sep 2022 02:06:25 GMT",
"version": "v1"
}
] |
2022-09-16
|
[
[
"Singh",
"Shikhar",
""
],
[
"Qasemi",
"Ehsan",
""
],
[
"Chen",
"Muhao",
""
]
] |
In recent years, vision-language models (VLMs) have shown remarkable performance on visual reasoning tasks (e.g. attributes, location). While such tasks measure the requisite knowledge to ground and reason over a given visual instance, they do not, however, measure the ability of VLMs to retain and generalize such knowledge. In this work, we evaluate their ability to acquire "visible" physical knowledge -- the information that is easily accessible from images of static scenes, particularly across the dimensions of object color, size and space. We build an automatic pipeline to derive a comprehensive knowledge resource for calibrating and probing these models. Our results indicate a severe gap between model and human performance across all three tasks. Furthermore, our caption pretrained baseline (CapBERT) significantly outperforms VLMs on both size and spatial tasks -- highlighting that despite sufficient access to ground language with visual modality, they struggle to retain such knowledge. The dataset and code are available at https://github.com/Axe--/ViPhy.
|
2308.10962
|
Adrian Boedtker Ghansah
|
Adrian B. Ghansah, Jeeseop Kim, Maegan Tucker, Aaron D. Ames
|
Humanoid Robot Co-Design: Coupling Hardware Design with Gait Generation
via Hybrid Zero Dynamics
|
7 pages, 6 figures, accepted to CDC 2023
| null | null | null |
cs.RO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Selecting robot design parameters can be challenging since these parameters
are often coupled with the performance of the controller and, therefore, the
resulting capabilities of the robot. This leads to a time-consuming and often
expensive process whereby one iterates between designing the robot and manually
evaluating its capabilities. This is particularly challenging for bipedal
robots, where it can be difficult to evaluate the behavior of the system due to
the underlying nonlinear and hybrid dynamics. Thus, in an effort to streamline
the design process of bipedal robots, and maximize their performance, this
paper presents a systematic framework for the co-design of humanoid robots and
their associated walking gaits. To this end, we leverage the framework of
hybrid zero dynamic (HZD) gait generation, which gives a formal approach to the
generation of dynamic walking gaits. The key novelty of this paper is to
consider both virtual constraints associated with the actuators of the robot,
coupled with design virtual constraints that encode the associated parameters
of the robot to be designed. These virtual constraints are combined in an HZD
optimization problem which simultaneously determines the design parameters
while finding a stable walking gait that minimizes a given cost function. The
proposed approach is demonstrated through the design of a novel humanoid robot,
ADAM, wherein its thigh and shin are co-designed so as to yield energy
efficient bipedal locomotion.
|
[
{
"created": "Mon, 21 Aug 2023 18:15:47 GMT",
"version": "v1"
}
] |
2023-08-23
|
[
[
"Ghansah",
"Adrian B.",
""
],
[
"Kim",
"Jeeseop",
""
],
[
"Tucker",
"Maegan",
""
],
[
"Ames",
"Aaron D.",
""
]
] |
Selecting robot design parameters can be challenging since these parameters are often coupled with the performance of the controller and, therefore, the resulting capabilities of the robot. This leads to a time-consuming and often expensive process whereby one iterates between designing the robot and manually evaluating its capabilities. This is particularly challenging for bipedal robots, where it can be difficult to evaluate the behavior of the system due to the underlying nonlinear and hybrid dynamics. Thus, in an effort to streamline the design process of bipedal robots, and maximize their performance, this paper presents a systematic framework for the co-design of humanoid robots and their associated walking gaits. To this end, we leverage the framework of hybrid zero dynamic (HZD) gait generation, which gives a formal approach to the generation of dynamic walking gaits. The key novelty of this paper is to consider both virtual constraints associated with the actuators of the robot, coupled with design virtual constraints that encode the associated parameters of the robot to be designed. These virtual constraints are combined in an HZD optimization problem which simultaneously determines the design parameters while finding a stable walking gait that minimizes a given cost function. The proposed approach is demonstrated through the design of a novel humanoid robot, ADAM, wherein its thigh and shin are co-designed so as to yield energy efficient bipedal locomotion.
|
1911.01156
|
Alun Preece
|
Frank Stein, Alun Preece
|
AAAI FSS-19: Artificial Intelligence in Government and Public Sector
Proceedings
|
Post-symposium proceedings including 18 papers
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proceedings of the AAAI Fall Symposium on Artificial Intelligence in
Government and Public Sector, Arlington, Virginia, USA, November 7-8, 2019
|
[
{
"created": "Mon, 4 Nov 2019 12:26:51 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Nov 2019 08:07:11 GMT",
"version": "v2"
}
] |
2019-12-02
|
[
[
"Stein",
"Frank",
""
],
[
"Preece",
"Alun",
""
]
] |
Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Arlington, Virginia, USA, November 7-8, 2019
|
1107.3245
|
Piotr Frackiewicz
|
Piotr Frackiewicz
|
Quantum information approach to normal representation of extensive games
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We modify the concept of a quantum strategic game to make it useful for
extensive form games. We prove that our modification allows us to consider the
normal representation of any finite extensive game using the fundamental
concepts of quantum information. Selten's Horse game and the general form of a
two-stage extensive game with perfect information are studied to illustrate a
potential application of our idea. In both examples we use the
Eisert-Wilkens-Lewenstein approach as well as the Marinatto-Weber approach to
the quantization of games.
|
[
{
"created": "Sat, 16 Jul 2011 18:42:17 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jul 2011 05:59:25 GMT",
"version": "v2"
}
] |
2011-07-20
|
[
[
"Frackiewicz",
"Piotr",
""
]
] |
We modify the concept of a quantum strategic game to make it useful for extensive form games. We prove that our modification allows us to consider the normal representation of any finite extensive game using the fundamental concepts of quantum information. Selten's Horse game and the general form of a two-stage extensive game with perfect information are studied to illustrate a potential application of our idea. In both examples we use the Eisert-Wilkens-Lewenstein approach as well as the Marinatto-Weber approach to the quantization of games.
|
2008.07956
|
Farhan Khawar
|
Farhan Khawar, Leonard Kin Man Poon, Nevin Lianwen Zhang
|
Learning the Structure of Auto-Encoding Recommenders
|
Proceedings of The Web Conference 2020
| null |
10.1145/3366423.3380135
| null |
cs.IR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autoencoder recommenders have recently shown state-of-the-art performance in
the recommendation task due to their ability to model non-linear item
relationships effectively. However, existing autoencoder recommenders use
fully-connected neural network layers and do not employ structure learning.
This can lead to inefficient training, especially when the data is sparse, as
is commonly the case in collaborative filtering. This in turn lowers
generalization ability and reduces performance. In this paper, we introduce
structure learning for autoencoder recommenders by taking advantage of the
inherent item groups present in the collaborative filtering domain. Due to the
nature of items in general, we know that certain items are more related to each
other than to other items. Based on this, we propose a method that first learns
groups of related items and then uses this information to determine the
connectivity structure of an auto-encoding neural network. This results in a
network that is sparsely connected. This sparse structure can be viewed as a
prior that guides the network training. Empirically we demonstrate that the
proposed structure learning enables the autoencoder to converge to a local
optimum with a much smaller spectral norm and generalization error bound than
the fully-connected network. The resultant sparse network considerably
outperforms the state-of-the-art methods like \textsc{Mult-vae/Mult-dae} on
multiple benchmarked datasets even when the same number of parameters and flops
are used. It also has a better cold-start performance.
|
[
{
"created": "Tue, 18 Aug 2020 14:37:40 GMT",
"version": "v1"
}
] |
2020-08-19
|
[
[
"Khawar",
"Farhan",
""
],
[
"Poon",
"Leonard Kin Man",
""
],
[
"Zhang",
"Nevin Lianwen",
""
]
] |
Autoencoder recommenders have recently shown state-of-the-art performance in the recommendation task due to their ability to model non-linear item relationships effectively. However, existing autoencoder recommenders use fully-connected neural network layers and do not employ structure learning. This can lead to inefficient training, especially when the data is sparse, as is commonly the case in collaborative filtering. This in turn lowers generalization ability and reduces performance. In this paper, we introduce structure learning for autoencoder recommenders by taking advantage of the inherent item groups present in the collaborative filtering domain. Due to the nature of items in general, we know that certain items are more related to each other than to other items. Based on this, we propose a method that first learns groups of related items and then uses this information to determine the connectivity structure of an auto-encoding neural network. This results in a network that is sparsely connected. This sparse structure can be viewed as a prior that guides the network training. Empirically we demonstrate that the proposed structure learning enables the autoencoder to converge to a local optimum with a much smaller spectral norm and generalization error bound than the fully-connected network. The resultant sparse network considerably outperforms the state-of-the-art methods like \textsc{Mult-vae/Mult-dae} on multiple benchmarked datasets even when the same number of parameters and flops are used. It also has a better cold-start performance.
|
2210.07312
|
Md Masudur Rahman
|
Md Masudur Rahman, Yexiang Xue
|
Bootstrap Advantage Estimation for Policy Optimization in Reinforcement
Learning
|
Accepted at IEEE ICMLA 2022
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an advantage estimation approach based on data
augmentation for policy optimization. Unlike existing methods, which apply data
augmentation to the input to learn the value and policy functions, our method
uses data augmentation to compute a bootstrap advantage estimation. This
Bootstrap Advantage Estimation (BAE) is then used for learning and updating the
gradient of policy and value function. To demonstrate the effectiveness of our
approach, we conducted experiments on several environments. These environments
are from three benchmarks: Procgen, Deepmind Control, and Pybullet, which
include both image and vector-based observations; discrete and continuous
action spaces. We observe that our method reduces the policy and the value loss
better than the Generalized advantage estimation (GAE) method and eventually
improves cumulative return. Furthermore, our method performs better than two
recently proposed data augmentation techniques (RAD and DRAC). Overall, our
method performs better empirically than baselines in sample efficiency and
generalization, where the agent is tested in unseen environments.
|
[
{
"created": "Thu, 13 Oct 2022 19:30:43 GMT",
"version": "v1"
}
] |
2022-10-17
|
[
[
"Rahman",
"Md Masudur",
""
],
[
"Xue",
"Yexiang",
""
]
] |
This paper proposes an advantage estimation approach based on data augmentation for policy optimization. Unlike existing methods, which apply data augmentation to the input to learn the value and policy functions, our method uses data augmentation to compute a bootstrap advantage estimation. This Bootstrap Advantage Estimation (BAE) is then used for learning and updating the gradient of policy and value function. To demonstrate the effectiveness of our approach, we conducted experiments on several environments. These environments are from three benchmarks: Procgen, Deepmind Control, and Pybullet, which include both image and vector-based observations; discrete and continuous action spaces. We observe that our method reduces the policy and the value loss better than the Generalized advantage estimation (GAE) method and eventually improves cumulative return. Furthermore, our method performs better than two recently proposed data augmentation techniques (RAD and DRAC). Overall, our method performs better empirically than baselines in sample efficiency and generalization, where the agent is tested in unseen environments.
|
2211.05184
|
Zishan Gu
|
Zishan Gu, Jintang Li and Liang Chen
|
Are All Edges Necessary? A Unified Framework for Graph Purification
| null | null | null | null |
cs.SI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Graph Neural Networks (GNNs), deep learning models working on
graph-structured data, have achieved advanced performance in many works.
However, it has been shown repeatedly that not all edges in a graph are
necessary for
the training of machine learning models. In other words, some of the
connections between nodes may bring redundant or even misleading information to
downstream tasks. In this paper, we try to provide a method to drop edges in
order to purify the graph data from a new perspective. Specifically, it is a
framework to purify graphs with the least loss of information, under which the
core problems are how to better evaluate the edges and how to delete the
relatively redundant edges with the least loss of information. To address the
above two problems, we propose several measurements for the evaluation and
different judges and filters for the edge deletion. We also introduce a
residual-iteration strategy and a surrogate model for measurements requiring
unknown information. The experimental results show that our proposed
measurements for KL divergence with constraints to maintain the connectivity of
the graph and delete edges in an iterative way can remove the most edges
while preserving the performance of GNNs. Moreover, further experiments show
that this method also achieves the best defense performance against adversarial
attacks.
|
[
{
"created": "Wed, 9 Nov 2022 20:28:25 GMT",
"version": "v1"
}
] |
2022-11-11
|
[
[
"Gu",
"Zishan",
""
],
[
"Li",
"Jintang",
""
],
[
"Chen",
"Liang",
""
]
] |
Graph Neural Networks (GNNs), deep learning models working on graph-structured data, have achieved advanced performance in many works. However, it has been shown repeatedly that not all edges in a graph are necessary for the training of machine learning models. In other words, some of the connections between nodes may bring redundant or even misleading information to downstream tasks. In this paper, we try to provide a method to drop edges in order to purify the graph data from a new perspective. Specifically, it is a framework to purify graphs with the least loss of information, under which the core problems are how to better evaluate the edges and how to delete the relatively redundant edges with the least loss of information. To address the above two problems, we propose several measurements for the evaluation and different judges and filters for the edge deletion. We also introduce a residual-iteration strategy and a surrogate model for measurements requiring unknown information. The experimental results show that our proposed measurements for KL divergence with constraints to maintain the connectivity of the graph and delete edges in an iterative way can remove the most edges while preserving the performance of GNNs. Moreover, further experiments show that this method also achieves the best defense performance against adversarial attacks.
|
0809.3352
|
Steffen Kuehn
|
Steffen Kuehn
|
Generalized Prediction Intervals for Arbitrary Distributed
High-Dimensional Data
|
13 pages, 3 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper generalizes the traditional statistical concept of prediction
intervals for arbitrary probability density functions in high-dimensional
feature spaces by introducing significance level distributions, which provide
interval-independent probabilities for continuous random variables. The
advantage of the transformation of a probability density function into a
significance level distribution is that it enables one-class classification or
outlier detection in a direct manner.
|
[
{
"created": "Fri, 19 Sep 2008 11:02:39 GMT",
"version": "v1"
}
] |
2008-09-22
|
[
[
"Kuehn",
"Steffen",
""
]
] |
This paper generalizes the traditional statistical concept of prediction intervals for arbitrary probability density functions in high-dimensional feature spaces by introducing significance level distributions, which provide interval-independent probabilities for continuous random variables. The advantage of the transformation of a probability density function into a significance level distribution is that it enables one-class classification or outlier detection in a direct manner.
|
1809.09912
|
Maarten Vanhoof
|
Maarten Vanhoof, Thomas Ploetz, Zbigniew Smoreda
|
Geographical veracity of indicators derived from mobile phone data
|
4 pages, 3 figures, 2 tables. Short paper contributed to the Netmob
2017 conference in Milan
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this contribution we summarize insights on the geographical veracity of
using mobile phone data to create (statistical) indicators. We focus on
problems that persist with spatial allocation, spatial delineation and spatial
aggregation of information obtained from mobile phone data. For each of the
cases, we offer insights from our works on a French CDR dataset and propose
both short and long term solutions. As such, we aim at offering a list of
challenges, and a roadmap for future work on the topic.
|
[
{
"created": "Wed, 26 Sep 2018 11:24:37 GMT",
"version": "v1"
}
] |
2018-09-27
|
[
[
"Vanhoof",
"Maarten",
""
],
[
"Ploetz",
"Thomas",
""
],
[
"Smoreda",
"Zbigniew",
""
]
] |
In this contribution we summarize insights on the geographical veracity of using mobile phone data to create (statistical) indicators. We focus on problems that persist with spatial allocation, spatial delineation and spatial aggregation of information obtained from mobile phone data. For each of the cases, we offer insights from our works on a French CDR dataset and propose both short and long term solutions. As such, we aim at offering a list of challenges, and a roadmap for future work on the topic.
|
2305.06361
|
Chenguang Wang
|
Chenguang Wang, Tianshu Yu
|
Efficient Training of Multi-task Combinatorial Neural Solver with
Multi-armed Bandits
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficiently training a multi-task neural solver for various combinatorial
optimization problems (COPs) has been less studied so far. In this paper, we
propose a general and efficient training paradigm based on multi-armed bandits
to deliver a unified combinatorial multi-task neural solver. To this end, we
resort to the theoretical loss decomposition for multiple tasks under an
encoder-decoder framework, which enables more efficient training via proper
bandit task-sampling algorithms through an intra-task influence matrix. Our
method achieves much higher overall performance with either limited training
budgets or the same training epochs, compared to standard training schedules,
which can be promising for advising efficient training of other multi-task
large models. Additionally, the influence matrix can provide empirical evidence
of some common practices in the area of learning to optimize, which in turn
supports the validity of our approach.
|
[
{
"created": "Wed, 10 May 2023 14:20:34 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Oct 2023 06:35:46 GMT",
"version": "v2"
}
] |
2023-10-10
|
[
[
"Wang",
"Chenguang",
""
],
[
"Yu",
"Tianshu",
""
]
] |
Efficiently training a multi-task neural solver for various combinatorial optimization problems (COPs) has been less studied so far. In this paper, we propose a general and efficient training paradigm based on multi-armed bandits to deliver a unified combinatorial multi-task neural solver. To this end, we resort to the theoretical loss decomposition for multiple tasks under an encoder-decoder framework, which enables more efficient training via proper bandit task-sampling algorithms through an intra-task influence matrix. Our method achieves much higher overall performance with either limited training budgets or the same training epochs, compared to standard training schedules, which can be promising for advising efficient training of other multi-task large models. Additionally, the influence matrix can provide empirical evidence of some common practices in the area of learning to optimize, which in turn supports the validity of our approach.
|
2403.16898
|
Jialun Cao
|
Jialun Cao and Wuqi Zhang and Shing-Chi Cheung
|
Concerned with Data Contamination? Assessing Countermeasures in Code
Language Model
|
Adjust the format so that the layout looks better
| null | null | null |
cs.SE cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Various techniques have been proposed to leverage the capabilities of code
language models (CLMs) for SE tasks. While these techniques typically evaluate
their effectiveness using publicly available datasets, the evaluation can be
subject to data contamination threats where the evaluation datasets have
already been used to train the concerned CLMs. This can significantly affect
the reliability of the evaluation. Different countermeasures have been
suggested to mitigate the data contamination threat, including using more
recent data, curating new data, and refactoring existing data, yet it is
unclear whether these countermeasures could really mitigate data contamination
threats to model evaluation. To fill the gap, we conduct a systematic study to
quantify the impacts of these countermeasures on CLMs'
performance. To facilitate the study, we collected over 2 million Python
functions with timestamps ranging from January 1st, 2018, to December 31st,
2023. The data created before the models' cut-off date are considered
"contaminated data", while the data where the countermeasures are taken are
regarded as "cleansed data". We study the impact of these countermeasures by
investigating the difference in CLMs' performance on contaminated and cleansed
data derived from different countermeasures. Our experiments yield several
interesting observations. For instance, CLMs do not necessarily perform worse
on data after the models' cut-off date; on the contrary, they sometimes perform
better. In addition, refactoring did not always result in decreased
performance; it could lead to improvements instead. Furthermore, existing
metrics such as perplexity cannot distinguish contaminated/cleansed data. We
hope that the results and observations could help deepen the understanding of
CLMs' capabilities and inform the community about data contamination.
|
[
{
"created": "Mon, 25 Mar 2024 16:10:25 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Mar 2024 05:00:47 GMT",
"version": "v2"
}
] |
2024-03-29
|
[
[
"Cao",
"Jialun",
""
],
[
"Zhang",
"Wuqi",
""
],
[
"Cheung",
"Shing-Chi",
""
]
] |
Various techniques have been proposed to leverage the capabilities of code language models (CLMs) for SE tasks. While these techniques typically evaluate their effectiveness using publicly available datasets, the evaluation can be subject to data contamination threats where the evaluation datasets have already been used to train the concerned CLMs. This can significantly affect the reliability of the evaluation. Different countermeasures have been suggested to mitigate the data contamination threat, including using more recent data, curating new data, and refactoring existing data, yet it is unclear whether these countermeasures could really mitigate data contamination threats to model evaluation. To fill the gap, we conduct a systematic study to quantify the impacts of these countermeasures on CLMs' performance. To facilitate the study, we collected over 2 million Python functions with timestamps ranging from January 1st, 2018, to December 31st, 2023. The data created before the models' cut-off date are considered "contaminated data", while the data where the countermeasures are taken are regarded as "cleansed data". We study the impact of these countermeasures by investigating the difference in CLMs' performance on contaminated and cleansed data derived from different countermeasures. Our experiments yield several interesting observations. For instance, CLMs do not necessarily perform worse on data after the models' cut-off date; on the contrary, they sometimes perform better. In addition, refactoring did not always result in decreased performance; it could lead to improvements instead. Furthermore, existing metrics such as perplexity cannot distinguish contaminated/cleansed data. We hope that the results and observations could help deepen the understanding of CLMs' capabilities and inform the community about data contamination.
|
2106.03412
|
Silvia-Laura Pintea
|
Silvia L.Pintea and Nergis Tomen and Stanley F. Goes and Marco Loog
and Jan C. van Gemert
|
Resolution learning in deep convolutional networks using scale-space
theory
|
Preprint accepted by IEEE Transactions on Image Processing, 2021
(TIP). Link to final published article:
https://ieeexplore.ieee.org/abstract/document/9552550
|
IEEE Transactions on Image Processing, vol. 30, pp. 8342-8353,
2021
|
10.1109/TIP.2021.3115001
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Resolution in deep convolutional neural networks (CNNs) is typically bounded
by the receptive field size through filter sizes, and subsampling layers or
strided convolutions on feature maps. The optimal resolution may vary
significantly depending on the dataset. Modern CNNs hard-code their resolution
hyper-parameters in the network architecture which makes tuning such
hyper-parameters cumbersome. We propose to do away with hard-coded resolution
hyper-parameters and aim to learn the appropriate resolution from data. We use
scale-space theory to obtain a self-similar parametrization of filters and make
use of the N-Jet: a truncated Taylor series to approximate a filter by a
learned combination of Gaussian derivative filters. The parameter sigma of the
Gaussian basis controls both the amount of detail the filter encodes and the
spatial extent of the filter. Since sigma is a continuous parameter, we can
optimize it with respect to the loss. The proposed N-Jet layer achieves
comparable performance when used in state-of-the-art architectures, while
learning the correct resolution in each layer automatically. We evaluate our
N-Jet layer on both classification and segmentation, and we show that learning
sigma is especially beneficial for inputs at multiple sizes.
|
[
{
"created": "Mon, 7 Jun 2021 08:23:02 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Jun 2021 14:08:16 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Oct 2023 14:22:39 GMT",
"version": "v3"
}
] |
2023-10-25
|
[
[
"Pintea",
"Silvia L.",
""
],
[
"Tomen",
"Nergis",
""
],
[
"Goes",
"Stanley F.",
""
],
[
"Loog",
"Marco",
""
],
[
"van Gemert",
"Jan C.",
""
]
] |
Resolution in deep convolutional neural networks (CNNs) is typically bounded by the receptive field size through filter sizes, and subsampling layers or strided convolutions on feature maps. The optimal resolution may vary significantly depending on the dataset. Modern CNNs hard-code their resolution hyper-parameters in the network architecture which makes tuning such hyper-parameters cumbersome. We propose to do away with hard-coded resolution hyper-parameters and aim to learn the appropriate resolution from data. We use scale-space theory to obtain a self-similar parametrization of filters and make use of the N-Jet: a truncated Taylor series to approximate a filter by a learned combination of Gaussian derivative filters. The parameter sigma of the Gaussian basis controls both the amount of detail the filter encodes and the spatial extent of the filter. Since sigma is a continuous parameter, we can optimize it with respect to the loss. The proposed N-Jet layer achieves comparable performance when used in state-of-the-art architectures, while learning the correct resolution in each layer automatically. We evaluate our N-Jet layer on both classification and segmentation, and we show that learning sigma is especially beneficial for inputs at multiple sizes.
|
1811.07628
|
Goutam Bhat
|
Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, Michael Felsberg
|
ATOM: Accurate Tracking by Overlap Maximization
|
CVPR 2019 (Oral). Complete code and models are available at
https://github.com/visionml/pytracking
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While recent years have witnessed astonishing improvements in visual tracking
robustness, the advancements in tracking accuracy have been limited. As the
focus has been directed towards the development of powerful classifiers, the
problem of accurate target state estimation has been largely overlooked. In
fact, most trackers resort to a simple multi-scale search in order to estimate
the target bounding box. We argue that this approach is fundamentally limited
since target estimation is a complex task, requiring high-level knowledge about
the object.
We address this problem by proposing a novel tracking architecture,
consisting of dedicated target estimation and classification components. High
level knowledge is incorporated into the target estimation through extensive
offline learning. Our target estimation component is trained to predict the
overlap between the target object and an estimated bounding box. By carefully
integrating target-specific information, our approach achieves previously
unseen bounding box accuracy. We further introduce a classification component
that is trained online to guarantee high discriminative power in the presence
of distractors. Our final tracking framework sets a new state-of-the-art on
five challenging benchmarks. On the new large-scale TrackingNet dataset, our
tracker ATOM achieves a relative gain of 15% over the previous best approach,
while running at over 30 FPS. Code and models are available at
https://github.com/visionml/pytracking.
|
[
{
"created": "Mon, 19 Nov 2018 11:40:17 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Apr 2019 17:56:18 GMT",
"version": "v2"
}
] |
2019-04-12
|
[
[
"Danelljan",
"Martin",
""
],
[
"Bhat",
"Goutam",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Felsberg",
"Michael",
""
]
] |
While recent years have witnessed astonishing improvements in visual tracking robustness, the advancements in tracking accuracy have been limited. As the focus has been directed towards the development of powerful classifiers, the problem of accurate target state estimation has been largely overlooked. In fact, most trackers resort to a simple multi-scale search in order to estimate the target bounding box. We argue that this approach is fundamentally limited since target estimation is a complex task, requiring high-level knowledge about the object. We address this problem by proposing a novel tracking architecture, consisting of dedicated target estimation and classification components. High level knowledge is incorporated into the target estimation through extensive offline learning. Our target estimation component is trained to predict the overlap between the target object and an estimated bounding box. By carefully integrating target-specific information, our approach achieves previously unseen bounding box accuracy. We further introduce a classification component that is trained online to guarantee high discriminative power in the presence of distractors. Our final tracking framework sets a new state-of-the-art on five challenging benchmarks. On the new large-scale TrackingNet dataset, our tracker ATOM achieves a relative gain of 15% over the previous best approach, while running at over 30 FPS. Code and models are available at https://github.com/visionml/pytracking.
|
2004.14503
|
Ji Ma
|
Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall and Ryan McDonald
|
Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic
Question Generation
|
14 pages, 4 figures
| null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major obstacle to the widespread adoption of neural retrieval models is
that they require large supervised training sets to surpass traditional
term-based techniques, which are constructed from raw corpora. In this paper,
we propose an approach to zero-shot learning for passage retrieval that uses
synthetic question generation to close this gap. The question generation system
is trained on general domain data, but is applied to documents in the targeted
domain. This allows us to create arbitrarily large, yet noisy, question-passage
relevance pairs that are domain specific. Furthermore, when this is coupled
with a simple hybrid term-neural model, first-stage retrieval performance can
be improved further. Empirically, we show that this is an effective strategy
for building neural passage retrieval models in the absence of large training
corpora. Depending on the domain, this technique can even approach the accuracy
of supervised models.
|
[
{
"created": "Wed, 29 Apr 2020 22:21:31 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Jan 2021 13:29:55 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Jan 2021 16:04:12 GMT",
"version": "v3"
}
] |
2021-01-28
|
[
[
"Ma",
"Ji",
""
],
[
"Korotkov",
"Ivan",
""
],
[
"Yang",
"Yinfei",
""
],
[
"Hall",
"Keith",
""
],
[
"McDonald",
"Ryan",
""
]
] |
A major obstacle to the widespread adoption of neural retrieval models is that they require large supervised training sets to surpass traditional term-based techniques, which are constructed from raw corpora. In this paper, we propose an approach to zero-shot learning for passage retrieval that uses synthetic question generation to close this gap. The question generation system is trained on general domain data, but is applied to documents in the targeted domain. This allows us to create arbitrarily large, yet noisy, question-passage relevance pairs that are domain specific. Furthermore, when this is coupled with a simple hybrid term-neural model, first-stage retrieval performance can be improved further. Empirically, we show that this is an effective strategy for building neural passage retrieval models in the absence of large training corpora. Depending on the domain, this technique can even approach the accuracy of supervised models.
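The hybrid term-neural first-stage retrieval mentioned in the abstract can be sketched as a linear interpolation of a term-matching score and an embedding-style similarity. Both scoring functions below are illustrative stand-ins (not the paper's BM25 variant or its trained dense model):

```python
from collections import Counter
import math

def term_score(query, passage):
    # Term-overlap score standing in for a traditional term-based
    # retriever such as BM25.
    q, p = Counter(query.split()), Counter(passage.split())
    return sum(min(q[t], p[t]) for t in q)

def dense_score(query, passage):
    # Toy "neural" score: cosine similarity of bag-of-words vectors,
    # standing in for an embedding model trained on synthetic
    # question-passage pairs.
    q, p = Counter(query.split()), Counter(passage.split())
    dot = sum(q[t] * p[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return dot / norm if norm else 0.0

def hybrid_score(query, passage, alpha=0.5):
    # Linear interpolation of the two signals, as in hybrid
    # first-stage retrieval.
    return alpha * term_score(query, passage) + (1 - alpha) * dense_score(query, passage)

passages = ["the cat sat on the mat", "stock prices rose sharply today"]
ranked = sorted(passages, key=lambda p: hybrid_score("where did the cat sit", p),
                reverse=True)
```

In practice the interpolation weight `alpha` would be tuned on held-out (possibly synthetic) relevance pairs.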
|
2203.15215
|
Li Ni
|
Li Ni, Hefei Xu, Yiwen Zhang and Wenjian Luo
|
Spatial-Aware Local Community Detection Guided by Dominance Relation
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of finding the spatial-aware community for a given node has been
defined and investigated in geo-social networks. However, existing studies
suffer from two limitations: a) the criteria of defining communities are
determined by parameters, which are difficult to set; b) algorithms may require
global information and are not suitable for situations where the network is
incomplete. Therefore, we propose spatial-aware local community detection
(SLCD), which finds the spatial-aware local community with only local
information and defines the community based on the difference in the sparseness
of edges inside and outside the community. Specifically, to address the SLCD
problem, we design a novel spatial-aware local community detection algorithm
based on the dominance relation, but this algorithm incurs a high cost. To further
improve the efficiency, we propose an approximate algorithm. Experimental
results demonstrate that the proposed approximate algorithm outperforms the
comparison algorithms.
|
[
{
"created": "Tue, 29 Mar 2022 03:16:14 GMT",
"version": "v1"
}
] |
2022-03-30
|
[
[
"Ni",
"Li",
""
],
[
"Xu",
"Hefei",
""
],
[
"Zhang",
"Yiwen",
""
],
[
"Luo",
"Wenjian",
""
]
] |
The problem of finding the spatial-aware community for a given node has been defined and investigated in geo-social networks. However, existing studies suffer from two limitations: a) the criteria of defining communities are determined by parameters, which are difficult to set; b) algorithms may require global information and are not suitable for situations where the network is incomplete. Therefore, we propose spatial-aware local community detection (SLCD), which finds the spatial-aware local community with only local information and defines the community based on the difference in the sparseness of edges inside and outside the community. Specifically, to address the SLCD problem, we design a novel spatial-aware local community detection algorithm based on the dominance relation, but this algorithm incurs a high cost. To further improve the efficiency, we propose an approximate algorithm. Experimental results demonstrate that the proposed approximate algorithm outperforms the comparison algorithms.
|
2203.08565
|
Valentin Hofmann
|
Valentin Hofmann, Goran Glava\v{s}, Nikola Ljube\v{s}i\'c, Janet B.
Pierrehumbert, Hinrich Sch\"utze
|
Geographic Adaptation of Pretrained Language Models
|
TACL 2024 (pre-MIT Press publication version)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While pretrained language models (PLMs) have been shown to possess a plethora
of linguistic knowledge, the existing body of research has largely neglected
extralinguistic knowledge, which is generally difficult to obtain by
pretraining on text alone. Here, we contribute to closing this gap by examining
geolinguistic knowledge, i.e., knowledge about geographic variation in
language. We introduce geoadaptation, an intermediate training step that
couples language modeling with geolocation prediction in a multi-task learning
setup. We geoadapt four PLMs, covering language groups from three geographic
areas, and evaluate them on five different tasks: fine-tuned (i.e., supervised)
geolocation prediction, zero-shot (i.e., unsupervised) geolocation prediction,
fine-tuned language identification, zero-shot language identification, and
zero-shot prediction of dialect features. Geoadaptation is very successful at
injecting geolinguistic knowledge into the PLMs: the geoadapted PLMs
consistently outperform PLMs adapted using only language modeling (by
especially wide margins on zero-shot prediction tasks), and we obtain new
state-of-the-art results on two benchmarks for geolocation prediction and
language identification. Furthermore, we show that the effectiveness of
geoadaptation stems from its ability to geographically retrofit the
representation space of the PLMs.
|
[
{
"created": "Wed, 16 Mar 2022 11:55:00 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Jan 2023 00:20:48 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Jan 2024 22:57:45 GMT",
"version": "v3"
}
] |
2024-01-30
|
[
[
"Hofmann",
"Valentin",
""
],
[
"Glavaš",
"Goran",
""
],
[
"Ljubešić",
"Nikola",
""
],
[
"Pierrehumbert",
"Janet B.",
""
],
[
"Schütze",
"Hinrich",
""
]
] |
While pretrained language models (PLMs) have been shown to possess a plethora of linguistic knowledge, the existing body of research has largely neglected extralinguistic knowledge, which is generally difficult to obtain by pretraining on text alone. Here, we contribute to closing this gap by examining geolinguistic knowledge, i.e., knowledge about geographic variation in language. We introduce geoadaptation, an intermediate training step that couples language modeling with geolocation prediction in a multi-task learning setup. We geoadapt four PLMs, covering language groups from three geographic areas, and evaluate them on five different tasks: fine-tuned (i.e., supervised) geolocation prediction, zero-shot (i.e., unsupervised) geolocation prediction, fine-tuned language identification, zero-shot language identification, and zero-shot prediction of dialect features. Geoadaptation is very successful at injecting geolinguistic knowledge into the PLMs: the geoadapted PLMs consistently outperform PLMs adapted using only language modeling (by especially wide margins on zero-shot prediction tasks), and we obtain new state-of-the-art results on two benchmarks for geolocation prediction and language identification. Furthermore, we show that the effectiveness of geoadaptation stems from its ability to geographically retrofit the representation space of the PLMs.
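A geolocation-prediction objective like the one coupled with language modeling in geoadaptation needs a distance measure between predicted and true coordinates. One plausible choice (an assumption here, not necessarily the paper's exact loss) is the great-circle distance:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two (lat, lon) points,
    # usable as a regression target or evaluation metric for
    # geolocation prediction.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))
```

A multi-task setup would then combine this geolocation error with the language modeling loss via a weighting hyperparameter.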
|
2203.11604
|
Pawel Sroka
|
Pawe{\l} Sroka, Pawe{\l} Kryszkiewicz, Adrian Kliks
|
Radio Environment Maps for Dynamic Frequency Selection in V2X
Communications
| null |
2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring),
2020
|
10.1109/VTC2020-Spring48590.2020.9128655
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the concept of database-supported Vehicular
Dynamic Spectrum Access (VDSA) for platooning. As various researchers show
that the 5.9 GHz band, devoted to Intelligent Transportation Systems, may
suffer from channel congestion, we propose to offload part of this traffic to
white-spaces with the guidance of an active database system. We describe our
measurement campaign, which delivered the data to populate the dedicated radio
environment map. Once the map is created, it is used in three proposed
algorithms for VDSA: an optimal approach and two pragmatic ones.
|
[
{
"created": "Tue, 22 Mar 2022 10:39:40 GMT",
"version": "v1"
}
] |
2022-03-23
|
[
[
"Sroka",
"Paweł",
""
],
[
"Kryszkiewicz",
"Paweł",
""
],
[
"Kliks",
"Adrian",
""
]
] |
In this paper, we investigate the concept of database-supported Vehicular Dynamic Spectrum Access (VDSA) for platooning. As various researchers show that the 5.9 GHz band, devoted to Intelligent Transportation Systems, may suffer from channel congestion, we propose to offload part of this traffic to white-spaces with the guidance of an active database system. We describe our measurement campaign, which delivered the data to populate the dedicated radio environment map. Once the map is created, it is used in three proposed algorithms for VDSA: an optimal approach and two pragmatic ones.
|
2203.00508
|
Zhong Tian
|
Zhong Tian, Zhengchuan Chen, Min Wang, Yunjian Jia, and Wanli Wen
|
Reconfigurable Intelligent Surface-Aided Spectrum Sharing Coexisting
with Multiple Primary Networks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Considering a spectrum sharing system (SSS) coexisting with multiple primary
networks, we employ a well-designed reconfigurable intelligent surface (RIS)
to control the radio environments of wireless channels and relieve the
scarcity of spectrum resources. Specifically, the enhancement of the spectral
efficiency of the secondary user in the considered SSS is decomposed into two
subproblems, a second-order cone program (SOCP) and a fractional program of
convex quadratic form (CQFP), to alternately optimize the beamforming vector
at the secondary access point (S-AP) and the reflecting coefficients at the
RIS. The SOCP subproblem is shown to be a concave problem, which can be solved
optimally using standard convex optimization tools. The CQFP subproblem can be
solved by a low-complexity method of gradient-based linearization with domain
(GLD), providing a sub-optimal solution for fast deployment. Taking the
discrete phase control at the RIS into account, a nearest point searching with
penalty (NPSP) method is also developed, realizing the discretization of the
RIS phase shifts in practice. Simulation results indicate that both GLD and
NPSP achieve excellent performance.
|
[
{
"created": "Tue, 1 Mar 2022 14:53:13 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Nov 2022 04:55:35 GMT",
"version": "v2"
}
] |
2022-11-07
|
[
[
"Tian",
"Zhong",
""
],
[
"Chen",
"Zhengchuan",
""
],
[
"Wang",
"Min",
""
],
[
"Jia",
"Yunjian",
""
],
[
"Wen",
"Wanli",
""
]
] |
Considering a spectrum sharing system (SSS) coexisting with multiple primary networks, we employ a well-designed reconfigurable intelligent surface (RIS) to control the radio environments of wireless channels and relieve the scarcity of spectrum resources. Specifically, the enhancement of the spectral efficiency of the secondary user in the considered SSS is decomposed into two subproblems, a second-order cone program (SOCP) and a fractional program of convex quadratic form (CQFP), to alternately optimize the beamforming vector at the secondary access point (S-AP) and the reflecting coefficients at the RIS. The SOCP subproblem is shown to be a concave problem, which can be solved optimally using standard convex optimization tools. The CQFP subproblem can be solved by a low-complexity method of gradient-based linearization with domain (GLD), providing a sub-optimal solution for fast deployment. Taking the discrete phase control at the RIS into account, a nearest point searching with penalty (NPSP) method is also developed, realizing the discretization of the RIS phase shifts in practice. Simulation results indicate that both GLD and NPSP achieve excellent performance.
|
1310.5497
|
Jocelyne Troccaz
|
Emmanuel Promayon (TIMC-IMAG), Celine Fouard (TIMC-IMAG), Mathieu
Bailet (TIMC-IMAG), Aurelien Deram (TIMC-IMAG), Gaelle Fiard, Nikolai Hungr
(TIMC-IMAG), Vincent Luboz (TIMC-IMAG), Yohan Payan (TIMC-IMAG), Johan
Sarrazin (TIMC-IMAG), Nicolas Saubat (TIMC-IMAG), Sonia Yuki Selmi
(TIMC-IMAG), Sandrine Voros (TIMC-IMAG), Philippe Cinquin (TIMC-IMAG),
Jocelyne Troccaz (TIMC-IMAG)
|
Using CamiTK for rapid prototyping of interactive Computer Assisted
Medical Intervention applications
| null |
Conference proceedings : Annual International Conference of the
IEEE Engineering in Medicine and Biology Society. 2013 (2013) 4933-6
|
10.1109/EMBC.2013.6610654
| null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer Assisted Medical Intervention (CAMI hereafter) is a complex
multi-disciplinary field. CAMI research requires the collaboration of experts
in several fields as diverse as medicine, computer science, mathematics,
instrumentation, signal processing, mechanics, modeling, automatics, optics,
etc.
|
[
{
"created": "Mon, 21 Oct 2013 10:40:02 GMT",
"version": "v1"
}
] |
2013-10-22
|
[
[
"Promayon",
"Emmanuel",
"",
"TIMC-IMAG"
],
[
"Fouard",
"Celine",
"",
"TIMC-IMAG"
],
[
"Bailet",
"Mathieu",
"",
"TIMC-IMAG"
],
[
"Deram",
"Aurelien",
"",
"TIMC-IMAG"
],
[
"Fiard",
"Gaelle",
"",
"TIMC-IMAG"
],
[
"Hungr",
"Nikolai",
"",
"TIMC-IMAG"
],
[
"Luboz",
"Vincent",
"",
"TIMC-IMAG"
],
[
"Payan",
"Yohan",
"",
"TIMC-IMAG"
],
[
"Sarrazin",
"Johan",
"",
"TIMC-IMAG"
],
[
"Saubat",
"Nicolas",
"",
"TIMC-IMAG"
],
[
"Selmi",
"Sonia Yuki",
"",
"TIMC-IMAG"
],
[
"Voros",
"Sandrine",
"",
"TIMC-IMAG"
],
[
"Cinquin",
"Philippe",
"",
"TIMC-IMAG"
],
[
"Troccaz",
"Jocelyne",
"",
"TIMC-IMAG"
]
] |
Computer Assisted Medical Intervention (CAMI hereafter) is a complex multi-disciplinary field. CAMI research requires the collaboration of experts in several fields as diverse as medicine, computer science, mathematics, instrumentation, signal processing, mechanics, modeling, automatics, optics, etc.
|
2406.03894
|
Yaozhong Gan
|
Yaozhong Gan, Renye Yan, Xiaoyang Tan, Zhe Wu, Junliang Xing
|
Transductive Off-policy Proximal Policy Optimization
|
18
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Proximal Policy Optimization (PPO) is a popular model-free reinforcement
learning algorithm, esteemed for its simplicity and efficacy. However, due to
its inherent on-policy nature, its proficiency in harnessing data from
disparate policies is constrained. This paper introduces a novel off-policy
extension to the original PPO method, christened Transductive Off-policy PPO
(ToPPO). Herein, we provide theoretical justification for incorporating
off-policy data in PPO training and prudent guidelines for its safe
application. Our contribution includes a novel formulation of the policy
improvement lower bound for prospective policies derived from off-policy data,
accompanied by a computationally efficient mechanism to optimize this bound,
underpinned by assurances of monotonic improvement. Comprehensive experimental
results across six representative tasks underscore ToPPO's promising
performance.
|
[
{
"created": "Thu, 6 Jun 2024 09:29:40 GMT",
"version": "v1"
}
] |
2024-06-07
|
[
[
"Gan",
"Yaozhong",
""
],
[
"Yan",
"Renye",
""
],
[
"Tan",
"Xiaoyang",
""
],
[
"Wu",
"Zhe",
""
],
[
"Xing",
"Junliang",
""
]
] |
Proximal Policy Optimization (PPO) is a popular model-free reinforcement learning algorithm, esteemed for its simplicity and efficacy. However, due to its inherent on-policy nature, its proficiency in harnessing data from disparate policies is constrained. This paper introduces a novel off-policy extension to the original PPO method, christened Transductive Off-policy PPO (ToPPO). Herein, we provide theoretical justification for incorporating off-policy data in PPO training and prudent guidelines for its safe application. Our contribution includes a novel formulation of the policy improvement lower bound for prospective policies derived from off-policy data, accompanied by a computationally efficient mechanism to optimize this bound, underpinned by assurances of monotonic improvement. Comprehensive experimental results across six representative tasks underscore ToPPO's promising performance.
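For context, the clipped surrogate at the core of the original PPO, which ToPPO extends, can be sketched per sample as follows. This illustrates vanilla PPO only, not ToPPO's off-policy policy-improvement lower bound:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    # Per-sample clipped surrogate: take the minimum of the unclipped
    # importance-weighted advantage and its clipped counterpart, which
    # bounds how far a single update can move the policy away from the
    # behavior policy.
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1 + eps), 1 - eps) * advantage
    return min(unclipped, clipped)
```

The `min` makes the objective pessimistic: large probability ratios receive no extra credit for positive advantages, and are penalized in full for negative ones.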
|
2009.02018
|
DongGyu Joo
|
Doyeon Kim, Donggyu Joo, Junmo Kim
|
TiVGAN: Text to Image to Video Generation with Step-by-Step Evolutionary
Generator
|
IEEE Access
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in technology have led to the development of methods that can create
desired visual multimedia. In particular, image generation using deep learning
has been extensively studied across diverse fields. In comparison, video
generation, especially on conditional inputs, remains a challenging and less
explored area. To narrow this gap, we aim to train our model to produce a video
corresponding to a given text description. We propose a novel training
framework, Text-to-Image-to-Video Generative Adversarial Network (TiVGAN),
which evolves frame-by-frame and finally produces a full-length video. In the
first phase, we focus on creating a high-quality single video frame while
learning the relationship between the text and an image. As the steps proceed,
our model is trained gradually on an increasing number of consecutive frames.
This step-by-step learning process helps stabilize the training and enables
the creation of high-resolution video based on conditional text descriptions.
Qualitative and quantitative experimental results on various datasets
demonstrate the effectiveness of the proposed method.
|
[
{
"created": "Fri, 4 Sep 2020 06:33:08 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Jun 2021 00:25:23 GMT",
"version": "v2"
}
] |
2021-06-29
|
[
[
"Kim",
"Doyeon",
""
],
[
"Joo",
"Donggyu",
""
],
[
"Kim",
"Junmo",
""
]
] |
Advances in technology have led to the development of methods that can create desired visual multimedia. In particular, image generation using deep learning has been extensively studied across diverse fields. In comparison, video generation, especially on conditional inputs, remains a challenging and less explored area. To narrow this gap, we aim to train our model to produce a video corresponding to a given text description. We propose a novel training framework, Text-to-Image-to-Video Generative Adversarial Network (TiVGAN), which evolves frame-by-frame and finally produces a full-length video. In the first phase, we focus on creating a high-quality single video frame while learning the relationship between the text and an image. As the steps proceed, our model is trained gradually on an increasing number of consecutive frames. This step-by-step learning process helps stabilize the training and enables the creation of high-resolution video based on conditional text descriptions. Qualitative and quantitative experimental results on various datasets demonstrate the effectiveness of the proposed method.
|
2008.09817
|
Elizabeth Huang
|
Elizabeth Y. Huang and Dario Paccagnan and Wenjun Mei and Francesco
Bullo
|
Assign and Appraise: Achieving Optimal Performance in Collaborative
Teams
| null | null | null | null |
cs.SI cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tackling complex team problems requires understanding each team member's
skills in order to devise a task assignment maximizing the team performance.
This paper proposes a novel quantitative model describing the decentralized
process by which individuals in a team learn who has what abilities, while
concurrently assigning tasks to each of the team members. In the model, the
appraisal network represents team members' evaluations of one another and each
team member chooses their own workload. The appraisals and workload assignment
change simultaneously: each member builds their own local appraisal of
neighboring members based on the performance exhibited on previous tasks, while
the workload is redistributed based on the current appraisal estimates. We show
that the appraisal states can be reduced to a lower dimension due to the
presence of conserved quantities associated to the cycles of the appraisal
network. Building on this, we provide rigorous results characterizing the
ability, or inability, of the team to learn each other's skills and thus
converge to an allocation maximizing the team performance. We complement our
analysis with extensive numerical experiments.
|
[
{
"created": "Sat, 22 Aug 2020 11:39:09 GMT",
"version": "v1"
}
] |
2020-08-25
|
[
[
"Huang",
"Elizabeth Y.",
""
],
[
"Paccagnan",
"Dario",
""
],
[
"Mei",
"Wenjun",
""
],
[
"Bullo",
"Francesco",
""
]
] |
Tackling complex team problems requires understanding each team member's skills in order to devise a task assignment maximizing the team performance. This paper proposes a novel quantitative model describing the decentralized process by which individuals in a team learn who has what abilities, while concurrently assigning tasks to each of the team members. In the model, the appraisal network represents team members' evaluations of one another and each team member chooses their own workload. The appraisals and workload assignment change simultaneously: each member builds their own local appraisal of neighboring members based on the performance exhibited on previous tasks, while the workload is redistributed based on the current appraisal estimates. We show that the appraisal states can be reduced to a lower dimension due to the presence of conserved quantities associated to the cycles of the appraisal network. Building on this, we provide rigorous results characterizing the ability, or inability, of the team to learn each other's skills and thus converge to an allocation maximizing the team performance. We complement our analysis with extensive numerical experiments.
|
1608.01373
|
Lin Li
|
Lin Li and W.M. Campbell
|
Matching Community Structure Across Online Social Networks
| null |
Workshop on Networks in the Social and Information Sciences, NIPS
2015
| null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The discovery of community structure in networks has been a problem of
considerable interest in recent years. In online social networks, users are
oftentimes simultaneously involved in multiple social media sites, some of
which share common social relationships. It is of great interest to uncover a
shared community structure across these networks. However, in reality, users
typically identify themselves with different usernames across social media
sites. This creates great difficulty in detecting the community structure. In
this paper, we explore several approaches for community detection across
online social networks with limited knowledge of username alignment across the
networks. We refer to the known alignment of usernames as seeds. We
investigate strategies for seed selection and their impact on networks with
different fractions of overlapping vertices. The goal is to study the
interplay between network topologies and seed selection strategies, and to
understand how they affect the detected community structure. We also propose
several measures to assess the performance of community detection and use them
to measure the quality of the detected communities in both Twitter-Twitter
networks and Twitter-Instagram networks.
|
[
{
"created": "Wed, 3 Aug 2016 22:02:29 GMT",
"version": "v1"
}
] |
2016-08-05
|
[
[
"Li",
"Lin",
""
],
[
"Campbell",
"W. M.",
""
]
] |
The discovery of community structure in networks has been a problem of considerable interest in recent years. In online social networks, users are oftentimes simultaneously involved in multiple social media sites, some of which share common social relationships. It is of great interest to uncover a shared community structure across these networks. However, in reality, users typically identify themselves with different usernames across social media sites. This creates great difficulty in detecting the community structure. In this paper, we explore several approaches for community detection across online social networks with limited knowledge of username alignment across the networks. We refer to the known alignment of usernames as seeds. We investigate strategies for seed selection and their impact on networks with different fractions of overlapping vertices. The goal is to study the interplay between network topologies and seed selection strategies, and to understand how they affect the detected community structure. We also propose several measures to assess the performance of community detection and use them to measure the quality of the detected communities in both Twitter-Twitter networks and Twitter-Instagram networks.
|
1711.00244
|
Anamitra R. Choudhury
|
Dharma Teja Vooturi, Saurabh Goyal, Anamitra R. Choudhury, Yogish
Sabharwal, Ashish Verma
|
Efficient Inferencing of Compressed Deep Neural Networks
| null | null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The large number of weights in deep neural networks makes the models difficult
to deploy in low-memory environments such as mobile phones, IoT edge devices,
and "inferencing as a service" environments in the cloud. Prior work has
considered reducing the size of the models through compression techniques like
pruning, quantization, and Huffman encoding. However, efficient inferencing
using the compressed models has received little attention, especially with the
Huffman encoding in place. In this paper, we propose efficient parallel
algorithms for inferencing on single images and batches under various memory
constraints. Our experimental results show that our approach of using a
variable batch size for inferencing achieves 15-25\% performance improvement
in the inference throughput for AlexNet, while maintaining memory and latency
constraints.
|
[
{
"created": "Wed, 1 Nov 2017 08:16:40 GMT",
"version": "v1"
}
] |
2017-11-02
|
[
[
"Vooturi",
"Dharma Teja",
""
],
[
"Goyal",
"Saurabh",
""
],
[
"Choudhury",
"Anamitra R.",
""
],
[
"Sabharwal",
"Yogish",
""
],
[
"Verma",
"Ashish",
""
]
] |
The large number of weights in deep neural networks makes the models difficult to deploy in low-memory environments such as mobile phones, IoT edge devices, and "inferencing as a service" environments in the cloud. Prior work has considered reducing the size of the models through compression techniques like pruning, quantization, and Huffman encoding. However, efficient inferencing using the compressed models has received little attention, especially with the Huffman encoding in place. In this paper, we propose efficient parallel algorithms for inferencing on single images and batches under various memory constraints. Our experimental results show that our approach of using a variable batch size for inferencing achieves 15-25\% performance improvement in the inference throughput for AlexNet, while maintaining memory and latency constraints.
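The Huffman-encoding step the abstract refers to stores quantized weights as variable-length prefix codes, so inference must first decode the bitstream back into weight values. A minimal decoding sketch (the code table below is hypothetical, not from the paper) looks like:

```python
def huffman_decode(bits, code_table):
    # Walk the bitstring, emitting a weight whenever the buffered bits
    # match a codeword; prefix-freeness guarantees unambiguous decoding.
    inv = {code: value for value, code in code_table.items()}
    weights, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            weights.append(inv[buf])
            buf = ""
    return weights

# Hypothetical code table: the most frequent (pruned-to-zero) weight
# gets the shortest codeword.
table = {0.0: "0", 0.5: "10", -0.5: "11"}
decoded = huffman_decode("0100110", table)
```

The sequential dependency in this loop is exactly what makes parallel inferencing over Huffman-compressed models nontrivial.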
|
2211.05446
|
Meng Chen
|
Meng Chen, Li Lu, Jiadi Yu, Yingying Chen, Zhongjie Ba, Feng Lin, Kui
Ren
|
Privacy-Utility Balanced Voice De-Identification Using Adversarial
Examples
| null | null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Faced with the threat of identity leakage during voice data publishing, users
are caught in a privacy-utility dilemma when enjoying convenient voice
services. Existing studies employ direct modification or text-based
re-synthesis to de-identify users' voices, but these result in inconsistent
audibility in the presence of human participants. In this paper, we propose a
voice de-identification system which uses adversarial examples to balance the
privacy and utility of voice services. Instead of typical additive examples
that induce perceivable distortions, we design a novel convolutional
adversarial example that modulates perturbations into real-world room impulse
responses. Benefiting from this, our system can keep user identity from
exposure by Automatic Speaker Identification (ASI) while retaining the voice
perceptual quality for non-intrusive de-identification. Moreover, our system
learns a compact speaker distribution through a conditional variational
auto-encoder to sample diverse target embeddings on demand. Combining diverse
target generation and input-specific perturbation construction, our system
enables any-to-any identity transformation for adaptive de-identification.
Experimental results show that our system achieves 98% and 79% successful
de-identification on mainstream ASIs and commercial systems, with an objective
Mel cepstral distortion of 4.31dB and a subjective mean opinion score of 4.48.
|
[
{
"created": "Thu, 10 Nov 2022 09:35:58 GMT",
"version": "v1"
}
] |
2022-11-11
|
[
[
"Chen",
"Meng",
""
],
[
"Lu",
"Li",
""
],
[
"Yu",
"Jiadi",
""
],
[
"Chen",
"Yingying",
""
],
[
"Ba",
"Zhongjie",
""
],
[
"Lin",
"Feng",
""
],
[
"Ren",
"Kui",
""
]
] |
Faced with the threat of identity leakage during voice data publishing, users are caught in a privacy-utility dilemma when enjoying convenient voice services. Existing studies employ direct modification or text-based re-synthesis to de-identify users' voices, but these result in inconsistent audibility in the presence of human participants. In this paper, we propose a voice de-identification system which uses adversarial examples to balance the privacy and utility of voice services. Instead of typical additive examples that induce perceivable distortions, we design a novel convolutional adversarial example that modulates perturbations into real-world room impulse responses. Benefiting from this, our system can keep user identity from exposure by Automatic Speaker Identification (ASI) while retaining the voice perceptual quality for non-intrusive de-identification. Moreover, our system learns a compact speaker distribution through a conditional variational auto-encoder to sample diverse target embeddings on demand. Combining diverse target generation and input-specific perturbation construction, our system enables any-to-any identity transformation for adaptive de-identification. Experimental results show that our system achieves 98% and 79% successful de-identification on mainstream ASIs and commercial systems, with an objective Mel cepstral distortion of 4.31dB and a subjective mean opinion score of 4.48.
|
1802.08130
|
Jos\'e Vuelvas
|
Jos\'e Vuelvas and Fredy Ruiz
|
A novel incentive-based demand response model for Cournot competition in
electricity markets
| null |
Vuelvas, J. & Ruiz, F. Energy Syst (2018).
https://doi.org/10.1007/s12667-018-0271-2
| null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an analysis of competition between generators when
incentive-based demand response is employed in an electricity market. Thermal
and hydropower generation are considered in the model. A smooth inverse demand
function is designed using a sigmoid and two linear functions to model
consumer preferences under an incentive-based demand response program.
Generators compete to sell energy bilaterally to consumers, while the system
operator provides transmission and arbitrage services. The profit of each
agent is posed as an optimization problem, and the competition outcome is
found by simultaneously solving the Karush-Kuhn-Tucker conditions for all
generators. A Nash-Cournot equilibrium is found both when the system operates
normally and at peak demand times when DR is required. Under this model,
results show that DR diminishes energy consumption at peak periods, shifts
the power requirement to off-peak times, and improves the net consumer
surplus due to the incentives received for participating in the DR program.
However, the generators' profit decreases due to the reduction of traded
energy and market prices.
|
[
{
"created": "Thu, 22 Feb 2018 16:12:09 GMT",
"version": "v1"
}
] |
2018-02-23
|
[
[
"Vuelvas",
"José",
""
],
[
"Ruiz",
"Fredy",
""
]
] |
This paper presents an analysis of competition between generators when incentive-based demand response is employed in an electricity market. Thermal and hydropower generation are considered in the model. A smooth inverse demand function is designed using a sigmoid and two linear functions to model consumer preferences under an incentive-based demand response program. Generators compete to sell energy bilaterally to consumers, while the system operator provides transmission and arbitrage services. The profit of each agent is posed as an optimization problem, and the competition outcome is found by simultaneously solving the Karush-Kuhn-Tucker conditions for all generators. A Nash-Cournot equilibrium is found both when the system operates normally and at peak demand times when DR is required. Under this model, results show that DR diminishes energy consumption at peak periods, shifts the power requirement to off-peak times, and improves the net consumer surplus due to the incentives received for participating in the DR program. However, the generators' profit decreases due to the reduction of traded energy and market prices.
|
2210.16074
|
David Biesner
|
David Biesner, Helen Schneider, Benjamin Wulff, Ulrike Attenberger,
Rafet Sifa
|
Improving Chest X-Ray Classification by RNN-based Patient Monitoring
|
To be published in proceedings of IEEE International Conference on
Machine Learning Applications IEEE ICMLA 2022
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Chest X-ray imaging is one of the most common radiological tools for
detecting various pathologies related to the chest area and lung function.
In a clinical setting, automated assessment of chest radiographs has the
potential to assist physicians in their decision-making process and optimize
clinical workflows, for example by prioritizing emergency patients.
Most work analyzing the potential of machine learning models to classify
chest X-ray images focuses on vision methods that process and predict
pathologies for one image at a time. However, many patients undergo such a
procedure multiple times during the course of a treatment or during a single
hospital stay. The patient history, that is, previous images and especially
the corresponding diagnoses, contains useful information that can aid a
classification system in its prediction.
In this study, we analyze how diagnosis information can improve CNN-based
image classification models by constructing a novel dataset from the
well-studied CheXpert dataset of chest X-rays. We show that a model trained
with additional patient history information outperforms a model trained
without it by a significant margin.
We provide code to replicate the dataset creation and model training.
|
[
{
"created": "Fri, 28 Oct 2022 11:47:15 GMT",
"version": "v1"
}
] |
2022-10-31
|
[
[
"Biesner",
"David",
""
],
[
"Schneider",
"Helen",
""
],
[
"Wulff",
"Benjamin",
""
],
[
"Attenberger",
"Ulrike",
""
],
[
"Sifa",
"Rafet",
""
]
] |
Chest X-ray imaging is one of the most common radiological tools for detecting various pathologies related to the chest area and lung function. In a clinical setting, automated assessment of chest radiographs has the potential to assist physicians in their decision-making process and optimize clinical workflows, for example by prioritizing emergency patients. Most work analyzing the potential of machine learning models to classify chest X-ray images focuses on vision methods that process and predict pathologies for one image at a time. However, many patients undergo such a procedure multiple times during the course of a treatment or during a single hospital stay. The patient history, that is, previous images and especially the corresponding diagnoses, contains useful information that can aid a classification system in its prediction. In this study, we analyze how diagnosis information can improve CNN-based image classification models by constructing a novel dataset from the well-studied CheXpert dataset of chest X-rays. We show that a model trained with additional patient history information outperforms a model trained without it by a significant margin. We provide code to replicate the dataset creation and model training.
|
1501.02967
|
Thanh Bui
|
Thanh Bui
|
Analysis of Docker Security
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the last few years, the use of virtualization technologies has increased
dramatically, making the demand for efficient and secure virtualization
solutions ever more pressing. Container-based virtualization and
hypervisor-based virtualization are the two main types of virtualization
technologies that have emerged on the market. Of these two classes,
container-based virtualization provides a more lightweight and efficient
virtual environment, but not without security concerns. In this
paper, we analyze the security level of Docker, a well-known representative of
container-based approaches. The analysis considers two areas: (1) the internal
security of Docker, and (2) how Docker interacts with the security features of
the Linux kernel, such as SELinux and AppArmor, in order to harden the host
system. Furthermore, the paper also discusses and identifies what could be done
when using Docker to increase its level of security.
|
[
{
"created": "Tue, 13 Jan 2015 11:44:02 GMT",
"version": "v1"
}
] |
2015-01-14
|
[
[
"Bui",
"Thanh",
""
]
] |
Over the last few years, the use of virtualization technologies has increased dramatically, making the demand for efficient and secure virtualization solutions ever more pressing. Container-based virtualization and hypervisor-based virtualization are the two main types of virtualization technologies that have emerged on the market. Of these two classes, container-based virtualization provides a more lightweight and efficient virtual environment, but not without security concerns. In this paper, we analyze the security level of Docker, a well-known representative of container-based approaches. The analysis considers two areas: (1) the internal security of Docker, and (2) how Docker interacts with the security features of the Linux kernel, such as SELinux and AppArmor, in order to harden the host system. Furthermore, the paper also discusses and identifies what could be done when using Docker to increase its level of security.
|
2105.04328
|
Indrajit Kurmi
|
D.C. Schedl, I. Kurmi, and O. Bimber
|
An Autonomous Drone for Search and Rescue in Forests using Airborne
Optical Sectioning
|
21 pages, 9 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Drones will play an essential role in human-machine teaming in future search
and rescue (SAR) missions. We present a first prototype that finds people fully
autonomously in densely occluded forests. In the course of 17 field experiments
conducted over various forest types and under different flying conditions, our
drone found 38 out of 42 hidden persons; average precision was 86% for
predefined flight paths, while adaptive path planning (where potential findings
are double-checked) increased confidence by 15%. Image processing,
classification, and dynamic flight-path adaptation are computed onboard in
real-time and while flying. Our finding that deep-learning-based person
classification is unaffected by sparse and error-prone sampling within
one-dimensional synthetic apertures allows flights to be shortened and reduces
recording requirements to one-tenth of the number of images needed for sampling
using two-dimensional synthetic apertures. The goal of our adaptive path
planning is to find people as reliably and quickly as possible, which is
essential in time-critical applications, such as SAR. Our drone enables SAR
operations in remote areas without stable network coverage, as it transmits to
the rescue team only classification results that indicate detections and can
thus operate with intermittent minimal-bandwidth connections (e.g., by
satellite). Once received, these results can be visually enhanced for
interpretation on remote mobile devices.
|
[
{
"created": "Mon, 10 May 2021 13:05:22 GMT",
"version": "v1"
}
] |
2021-05-11
|
[
[
"Schedl",
"D. C.",
""
],
[
"Kurmi",
"I.",
""
],
[
"Bimber",
"O.",
""
]
] |
Drones will play an essential role in human-machine teaming in future search and rescue (SAR) missions. We present a first prototype that finds people fully autonomously in densely occluded forests. In the course of 17 field experiments conducted over various forest types and under different flying conditions, our drone found 38 out of 42 hidden persons; average precision was 86% for predefined flight paths, while adaptive path planning (where potential findings are double-checked) increased confidence by 15%. Image processing, classification, and dynamic flight-path adaptation are computed onboard in real-time and while flying. Our finding that deep-learning-based person classification is unaffected by sparse and error-prone sampling within one-dimensional synthetic apertures allows flights to be shortened and reduces recording requirements to one-tenth of the number of images needed for sampling using two-dimensional synthetic apertures. The goal of our adaptive path planning is to find people as reliably and quickly as possible, which is essential in time-critical applications, such as SAR. Our drone enables SAR operations in remote areas without stable network coverage, as it transmits to the rescue team only classification results that indicate detections and can thus operate with intermittent minimal-bandwidth connections (e.g., by satellite). Once received, these results can be visually enhanced for interpretation on remote mobile devices.
|
1201.1812
|
Jiun-Hung Yu
|
Jiun-Hung Yu and Hans-Andrea Loeliger
|
On Polynomial Remainder Codes
| null | null | null | null |
cs.IT math.IT math.RA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polynomial remainder codes are a large class of codes derived from the
Chinese remainder theorem that includes Reed-Solomon codes as a special case.
In this paper, we revisit these codes and study them more carefully than in
previous work. We explicitly allow the code symbols to be polynomials of
different degrees, which leads to two different notions of weight and distance.
Algebraic decoding is studied in detail. If the moduli are not irreducible,
the notion of an error locator polynomial is replaced by an error factor
polynomial. We then obtain a collection of gcd-based decoding algorithms, some
of which are not quite standard even when specialized to Reed-Solomon codes.
|
[
{
"created": "Mon, 9 Jan 2012 16:00:45 GMT",
"version": "v1"
}
] |
2012-01-10
|
[
[
"Yu",
"Jiun-Hung",
""
],
[
"Loeliger",
"Hans-Andrea",
""
]
] |
Polynomial remainder codes are a large class of codes derived from the Chinese remainder theorem that includes Reed-Solomon codes as a special case. In this paper, we revisit these codes and study them more carefully than in previous work. We explicitly allow the code symbols to be polynomials of different degrees, which leads to two different notions of weight and distance. Algebraic decoding is studied in detail. If the moduli are not irreducible, the notion of an error locator polynomial is replaced by an error factor polynomial. We then obtain a collection of gcd-based decoding algorithms, some of which are not quite standard even when specialized to Reed-Solomon codes.
|
1210.6685
|
Guodong Shi
|
Guodong Shi, Alexandre Proutiere and Karl Henrik Johansson
|
Distributed Optimization: Convergence Conditions from a Dynamical System
Perspective
| null | null | null | null |
cs.SY cs.DC math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper explores the fundamental properties of distributed minimization of
a sum of functions with each function only known to one node, and a
pre-specified level of node knowledge and computational capacity. We define the
optimization information each node receives from its objective function, the
neighboring information each node receives from its neighbors, and the
computational capacity each node can take advantage of in controlling its
state. It is proven that there exist a neighboring information way and a
control law that guarantee global optimal consensus if and only if the solution
sets of the local objective functions admit a nonempty intersection set for
fixed strongly connected graphs. Then we show that for any tolerated error, we
can find a control law that guarantees global optimal consensus within this
error for fixed, bidirectional, and connected graphs under mild conditions. For
time-varying graphs, we show that optimal consensus can always be achieved as
long as the graph is uniformly jointly strongly connected and the nonempty
intersection condition holds. The results illustrate that nonempty intersection
for the local optimal solution sets is a critical condition for successful
distributed optimization for a large class of algorithms.
|
[
{
"created": "Wed, 24 Oct 2012 21:28:36 GMT",
"version": "v1"
}
] |
2012-10-26
|
[
[
"Shi",
"Guodong",
""
],
[
"Proutiere",
"Alexandre",
""
],
[
"Johansson",
"Karl Henrik",
""
]
] |
This paper explores the fundamental properties of distributed minimization of a sum of functions with each function only known to one node, and a pre-specified level of node knowledge and computational capacity. We define the optimization information each node receives from its objective function, the neighboring information each node receives from its neighbors, and the computational capacity each node can take advantage of in controlling its state. It is proven that there exist a neighboring information way and a control law that guarantee global optimal consensus if and only if the solution sets of the local objective functions admit a nonempty intersection set for fixed strongly connected graphs. Then we show that for any tolerated error, we can find a control law that guarantees global optimal consensus within this error for fixed, bidirectional, and connected graphs under mild conditions. For time-varying graphs, we show that optimal consensus can always be achieved as long as the graph is uniformly jointly strongly connected and the nonempty intersection condition holds. The results illustrate that nonempty intersection for the local optimal solution sets is a critical condition for successful distributed optimization for a large class of algorithms.
|
2310.11960
|
Yanming Kang
|
Yanming Kang, Giang Tran, Hans De Sterck
|
Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for
Long Sequences
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer-based models have achieved state-of-the-art performance in many
areas. However, the quadratic complexity of self-attention with respect to the
input length hinders the applicability of Transformer-based models to long
sequences. To address this, we present Fast Multipole Attention, a new
attention mechanism that uses a divide-and-conquer strategy to reduce the time
and memory complexity of attention for sequences of length $n$ from
$\mathcal{O}(n^2)$ to $\mathcal{O}(n \log n)$ or $\mathcal{O}(n)$, while retaining a
global receptive field. The hierarchical approach groups queries, keys, and
values into $\mathcal{O}( \log n)$ levels of resolution, where groups at
greater distances are increasingly larger in size and the weights to compute
group quantities are learned. As such, the interaction between tokens far from
each other is considered in lower resolution in an efficient hierarchical
manner. The overall complexity of Fast Multipole Attention is $\mathcal{O}(n)$
or $\mathcal{O}(n \log n)$, depending on whether the queries are down-sampled
or not. This multi-level divide-and-conquer strategy is inspired by fast
summation methods from $n$-body physics and the Fast Multipole Method. We
perform evaluation on autoregressive and bidirectional language modeling tasks
and compare our Fast Multipole Attention model with other efficient attention
variants on medium-size datasets. We find empirically that the Fast Multipole
Transformer performs much better than other efficient transformers in terms of
memory size and accuracy. The Fast Multipole Attention mechanism has the
potential to empower large language models with much greater sequence lengths,
taking the full context into account in an efficient, naturally hierarchical
manner during training and when generating long sequences.
|
[
{
"created": "Wed, 18 Oct 2023 13:40:41 GMT",
"version": "v1"
},
{
"created": "Sat, 21 Oct 2023 01:56:32 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jul 2024 15:02:51 GMT",
"version": "v3"
}
] |
2024-07-31
|
[
[
"Kang",
"Yanming",
""
],
[
"Tran",
"Giang",
""
],
[
"De Sterck",
"Hans",
""
]
] |
Transformer-based models have achieved state-of-the-art performance in many areas. However, the quadratic complexity of self-attention with respect to the input length hinders the applicability of Transformer-based models to long sequences. To address this, we present Fast Multipole Attention, a new attention mechanism that uses a divide-and-conquer strategy to reduce the time and memory complexity of attention for sequences of length $n$ from $\mathcal{O}(n^2)$ to $\mathcal{O}(n \log n)$ or $O(n)$, while retaining a global receptive field. The hierarchical approach groups queries, keys, and values into $\mathcal{O}( \log n)$ levels of resolution, where groups at greater distances are increasingly larger in size and the weights to compute group quantities are learned. As such, the interaction between tokens far from each other is considered in lower resolution in an efficient hierarchical manner. The overall complexity of Fast Multipole Attention is $\mathcal{O}(n)$ or $\mathcal{O}(n \log n)$, depending on whether the queries are down-sampled or not. This multi-level divide-and-conquer strategy is inspired by fast summation methods from $n$-body physics and the Fast Multipole Method. We perform evaluation on autoregressive and bidirectional language modeling tasks and compare our Fast Multipole Attention model with other efficient attention variants on medium-size datasets. We find empirically that the Fast Multipole Transformer performs much better than other efficient transformers in terms of memory size and accuracy. The Fast Multipole Attention mechanism has the potential to empower large language models with much greater sequence lengths, taking the full context into account in an efficient, naturally hierarchical manner during training and when generating long sequences.
|
1910.14026
|
Federico Orsini
|
Federico Orsini, Massimiliano Gastaldi, Luca Mantecchini, Riccardo
Rossi
|
Neural networks trained with WiFi traces to predict airport passenger
behavior
|
Post-print of paper presented at the 2019 6th International
Conference on Models and Technologies for Intelligent Transportation Systems
(MT-ITS)
|
2019 6th International Conference on Models and Technologies for
Intelligent Transportation Systems (MT-ITS)
|
10.1109/MTITS.2019.8883365
| null |
cs.LG eess.SP stat.AP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of neural networks to predict airport passenger activity choices
inside the terminal is presented in this paper. Three network architectures are
proposed: Feedforward Neural Networks (FNN), Long Short-Term Memory (LSTM)
networks, and a combination of the two. Inputs to these models are both static
(passenger and trip characteristics) and dynamic (real-time passenger
tracking). A real-world case study exemplifies the application of these models,
using anonymous WiFi traces collected at Bologna Airport to train the networks.
The performance of the models was evaluated according to the misclassification
rate of passenger activity choices. In the LSTM approach, two different
multi-step forecasting strategies are tested. According to our findings, the
direct LSTM approach provides better results than the FNN, especially when the
prediction horizon is relatively short (20 minutes or less).
|
[
{
"created": "Wed, 30 Oct 2019 08:11:38 GMT",
"version": "v1"
}
] |
2019-11-01
|
[
[
"Orsini",
"Federico",
""
],
[
"Gastaldi",
"Massimiliano",
""
],
[
"Mantecchini",
"Luca",
""
],
[
"Rossi",
"Riccardo",
""
]
] |
The use of neural networks to predict airport passenger activity choices inside the terminal is presented in this paper. Three network architectures are proposed: Feedforward Neural Networks (FNN), Long Short-Term Memory (LSTM) networks, and a combination of the two. Inputs to these models are both static (passenger and trip characteristics) and dynamic (real-time passenger tracking). A real-world case study exemplifies the application of these models, using anonymous WiFi traces collected at Bologna Airport to train the networks. The performance of the models was evaluated according to the misclassification rate of passenger activity choices. In the LSTM approach, two different multi-step forecasting strategies are tested. According to our findings, the direct LSTM approach provides better results than the FNN, especially when the prediction horizon is relatively short (20 minutes or less).
|